00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 117 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3295 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.115 Fetching changes from the remote Git repository 00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.140 Using shallow fetch with depth 1 00:00:00.140 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.140 > git --version # timeout=10 00:00:00.168 > git --version # 'git version 2.39.2' 00:00:00.168 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.193 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.960 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.968 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.978 Checking out Revision 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b (FETCH_HEAD) 00:00:05.978 > git config core.sparsecheckout # timeout=10 00:00:05.987 > git read-tree -mu HEAD # timeout=10 00:00:06.000 > git checkout -f 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=5 00:00:06.015 Commit message: "jjb/jobs: add SPDK_TEST_SETUP flag into configuration" 00:00:06.015 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10 00:00:06.086 [Pipeline] Start of Pipeline 00:00:06.097 [Pipeline] library 00:00:06.099 Loading library shm_lib@master 00:00:06.099 Library shm_lib@master is cached. Copying from home. 00:00:06.113 [Pipeline] node 00:00:06.121 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.122 [Pipeline] { 00:00:06.131 [Pipeline] catchError 00:00:06.132 [Pipeline] { 00:00:06.140 [Pipeline] wrap 00:00:06.146 [Pipeline] { 00:00:06.151 [Pipeline] stage 00:00:06.153 [Pipeline] { (Prologue) 00:00:06.307 [Pipeline] sh 00:00:06.587 + logger -p user.info -t JENKINS-CI 00:00:06.602 [Pipeline] echo 00:00:06.603 Node: GP11 00:00:06.609 [Pipeline] sh 00:00:06.913 [Pipeline] setCustomBuildProperty 00:00:06.923 [Pipeline] echo 00:00:06.925 Cleanup processes 00:00:06.931 [Pipeline] sh 00:00:07.215 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.215 3294974 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.230 [Pipeline] sh 00:00:07.520 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.520 ++ grep -v 'sudo pgrep' 00:00:07.520 ++ awk '{print $1}' 00:00:07.520 + sudo kill -9 00:00:07.520 + true 00:00:07.536 [Pipeline] cleanWs 00:00:07.547 [WS-CLEANUP] Deleting project workspace... 00:00:07.547 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.554 [WS-CLEANUP] done 00:00:07.559 [Pipeline] setCustomBuildProperty 00:00:07.574 [Pipeline] sh 00:00:07.861 + sudo git config --global --replace-all safe.directory '*' 00:00:07.952 [Pipeline] httpRequest 00:00:07.994 [Pipeline] echo 00:00:07.995 Sorcerer 10.211.164.101 is alive 00:00:08.003 [Pipeline] httpRequest 00:00:08.008 HttpMethod: GET 00:00:08.009 URL: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:08.010 Sending request to url: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:08.035 Response Code: HTTP/1.1 200 OK 00:00:08.036 Success: Status code 200 is in the accepted range: 200,404 00:00:08.036 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:36.337 [Pipeline] sh 00:00:36.622 + tar --no-same-owner -xf jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:36.638 [Pipeline] httpRequest 00:00:36.671 [Pipeline] echo 00:00:36.672 Sorcerer 10.211.164.101 is alive 00:00:36.680 [Pipeline] httpRequest 00:00:36.685 HttpMethod: GET 00:00:36.686 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:36.686 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:36.713 Response Code: HTTP/1.1 200 OK 00:00:36.714 Success: Status code 200 is in the accepted range: 200,404 00:00:36.715 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:10.302 [Pipeline] sh 00:01:10.590 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:13.900 [Pipeline] sh 00:01:14.185 + git -C spdk log --oneline -n5 00:01:14.185 241d0f3c9 test: fix dpdk builds on ubuntu24 00:01:14.185 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:01:14.185 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:14.185 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:14.185 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:14.204 [Pipeline] withCredentials 00:01:14.216 > git --version # timeout=10 00:01:14.227 > git --version # 'git version 2.39.2' 00:01:14.247 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:14.249 [Pipeline] { 00:01:14.259 [Pipeline] retry 00:01:14.261 [Pipeline] { 00:01:14.278 [Pipeline] sh 00:01:14.565 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:14.577 [Pipeline] } 00:01:14.599 [Pipeline] // retry 00:01:14.605 [Pipeline] } 00:01:14.625 [Pipeline] // withCredentials 00:01:14.636 [Pipeline] httpRequest 00:01:14.654 [Pipeline] echo 00:01:14.655 Sorcerer 10.211.164.101 is alive 00:01:14.666 [Pipeline] httpRequest 00:01:14.681 HttpMethod: GET 00:01:14.682 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:14.683 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:14.684 Response Code: HTTP/1.1 200 OK 00:01:14.684 Success: Status code 200 is in the accepted range: 200,404 00:01:14.685 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:19.169 [Pipeline] sh 00:01:19.454 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:21.369 [Pipeline] sh 00:01:21.652 + git -C dpdk log --oneline -n5 00:01:21.652 caf0f5d395 version: 22.11.4 00:01:21.652 7d6f1cc05f Revert "net/iavf: fix abnormal 
disable HW interrupt" 00:01:21.653 dc9c799c7d vhost: fix missing spinlock unlock 00:01:21.653 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:21.653 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:21.666 [Pipeline] } 00:01:21.676 [Pipeline] // stage 00:01:21.681 [Pipeline] stage 00:01:21.682 [Pipeline] { (Prepare) 00:01:21.693 [Pipeline] writeFile 00:01:21.702 [Pipeline] sh 00:01:21.976 + logger -p user.info -t JENKINS-CI 00:01:21.988 [Pipeline] sh 00:01:22.267 + logger -p user.info -t JENKINS-CI 00:01:22.278 [Pipeline] sh 00:01:22.562 + cat autorun-spdk.conf 00:01:22.562 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.562 SPDK_TEST_NVMF=1 00:01:22.562 SPDK_TEST_NVME_CLI=1 00:01:22.562 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.562 SPDK_TEST_NVMF_NICS=e810 00:01:22.562 SPDK_TEST_VFIOUSER=1 00:01:22.562 SPDK_RUN_UBSAN=1 00:01:22.562 NET_TYPE=phy 00:01:22.562 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:22.562 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.569 RUN_NIGHTLY=1 00:01:22.574 [Pipeline] readFile 00:01:22.599 [Pipeline] withEnv 00:01:22.601 [Pipeline] { 00:01:22.615 [Pipeline] sh 00:01:22.900 + set -ex 00:01:22.900 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:22.900 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.900 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.900 ++ SPDK_TEST_NVMF=1 00:01:22.900 ++ SPDK_TEST_NVME_CLI=1 00:01:22.900 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.900 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.900 ++ SPDK_TEST_VFIOUSER=1 00:01:22.900 ++ SPDK_RUN_UBSAN=1 00:01:22.900 ++ NET_TYPE=phy 00:01:22.900 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:22.900 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.900 ++ RUN_NIGHTLY=1 00:01:22.900 + case $SPDK_TEST_NVMF_NICS in 00:01:22.900 + DRIVERS=ice 00:01:22.900 + [[ tcp == \r\d\m\a ]] 00:01:22.900 + [[ -n ice ]] 00:01:22.900 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:22.900 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:22.900 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:22.900 rmmod: ERROR: Module irdma is not currently loaded 00:01:22.900 rmmod: ERROR: Module i40iw is not currently loaded 00:01:22.900 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:22.900 + true 00:01:22.900 + for D in $DRIVERS 00:01:22.900 + sudo modprobe ice 00:01:22.900 + exit 0 00:01:22.910 [Pipeline] } 00:01:22.931 [Pipeline] // withEnv 00:01:22.937 [Pipeline] } 00:01:22.955 [Pipeline] // stage 00:01:22.965 [Pipeline] catchError 00:01:22.967 [Pipeline] { 00:01:22.985 [Pipeline] timeout 00:01:22.985 Timeout set to expire in 50 min 00:01:22.987 [Pipeline] { 00:01:23.003 [Pipeline] stage 00:01:23.005 [Pipeline] { (Tests) 00:01:23.021 [Pipeline] sh 00:01:23.304 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.304 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.304 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.304 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:23.304 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.304 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:23.304 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:23.304 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:23.304 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:23.304 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:23.304 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:23.304 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.304 + source /etc/os-release 00:01:23.304 ++ NAME='Fedora Linux' 00:01:23.304 ++ VERSION='38 (Cloud Edition)' 00:01:23.304 ++ ID=fedora 00:01:23.304 ++ VERSION_ID=38 00:01:23.304 ++ VERSION_CODENAME= 00:01:23.304 ++ PLATFORM_ID=platform:f38 00:01:23.304 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:23.304 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.304 ++ LOGO=fedora-logo-icon 00:01:23.304 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:23.304 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.304 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:23.304 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.304 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.305 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.305 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:23.305 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.305 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:23.305 ++ SUPPORT_END=2024-05-14 00:01:23.305 ++ VARIANT='Cloud Edition' 00:01:23.305 ++ VARIANT_ID=cloud 00:01:23.305 + uname -a 00:01:23.305 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:23.305 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:24.238 Hugepages 00:01:24.238 node hugesize free / total 00:01:24.238 node0 1048576kB 0 / 0 00:01:24.238 node0 2048kB 0 / 0 00:01:24.238 node1 1048576kB 0 / 0 00:01:24.238 node1 2048kB 0 / 0 00:01:24.238 00:01:24.238 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.238 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:24.238 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:24.238 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:24.238 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:24.238 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:24.238 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:24.238 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:24.238 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:24.238 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:24.238 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:24.238 + rm -f /tmp/spdk-ld-path 00:01:24.238 + source autorun-spdk.conf 00:01:24.238 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.238 ++ SPDK_TEST_NVMF=1 00:01:24.238 ++ SPDK_TEST_NVME_CLI=1 00:01:24.238 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.238 ++ SPDK_TEST_NVMF_NICS=e810 00:01:24.238 ++ SPDK_TEST_VFIOUSER=1 00:01:24.238 ++ SPDK_RUN_UBSAN=1 00:01:24.238 ++ NET_TYPE=phy 00:01:24.238 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:24.238 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.238 ++ RUN_NIGHTLY=1 00:01:24.238 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.238 + [[ -n '' ]] 00:01:24.238 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.238 + for M in /var/spdk/build-*-manifest.txt 00:01:24.238 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.238 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.238 + for M in /var/spdk/build-*-manifest.txt 00:01:24.238 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.238 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.238 ++ uname 00:01:24.238 + [[ Linux == \L\i\n\u\x ]] 00:01:24.238 + sudo dmesg -T 00:01:24.238 + sudo dmesg --clear 00:01:24.497 + dmesg_pid=3295680 00:01:24.497 + [[ Fedora Linux == FreeBSD ]] 00:01:24.497 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.497 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.497 + sudo dmesg -Tw 00:01:24.497 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.497 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.497 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.497 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.497 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.497 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.497 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.497 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.497 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.497 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.497 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.497 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.497 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.497 Test configuration: 00:01:24.497 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.497 SPDK_TEST_NVMF=1 00:01:24.497 SPDK_TEST_NVME_CLI=1 00:01:24.497 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.497 SPDK_TEST_NVMF_NICS=e810 00:01:24.497 SPDK_TEST_VFIOUSER=1 00:01:24.497 SPDK_RUN_UBSAN=1 00:01:24.497 NET_TYPE=phy 00:01:24.497 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:24.497 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.497 RUN_NIGHTLY=1 18:34:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:24.497 18:34:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.497 18:34:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.497 18:34:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.497 18:34:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.497 18:34:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.497 18:34:36 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.497 18:34:36 -- paths/export.sh@5 -- $ export PATH 00:01:24.497 18:34:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.497 18:34:36 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:24.497 18:34:36 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:24.497 18:34:36 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721925276.XXXXXX 00:01:24.497 18:34:36 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721925276.NsscEM 00:01:24.497 18:34:36 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:24.497 18:34:36 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:24.497 18:34:36 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.498 18:34:36 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:24.498 18:34:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.498 18:34:36 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.498 18:34:36 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:24.498 18:34:36 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:24.498 18:34:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.498 18:34:36 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:24.498 18:34:36 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:01:24.498 18:34:36 -- pm/common@17 -- $ local monitor 00:01:24.498 18:34:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.498 18:34:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.498 18:34:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.498 18:34:36 -- pm/common@21 -- $ date +%s 00:01:24.498 18:34:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.498 18:34:36 -- pm/common@21 -- $ date +%s 00:01:24.498 18:34:36 -- pm/common@25 -- $ sleep 1 00:01:24.498 18:34:36 -- pm/common@21 -- $ date +%s 00:01:24.498 18:34:36 -- pm/common@21 -- $ date +%s 00:01:24.498 18:34:36 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721925276 00:01:24.498 18:34:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721925276 00:01:24.498 18:34:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721925276 00:01:24.498 18:34:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721925276 00:01:24.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721925276_collect-vmstat.pm.log 00:01:24.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721925276_collect-cpu-load.pm.log 00:01:24.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721925276_collect-cpu-temp.pm.log 00:01:24.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721925276_collect-bmc-pm.bmc.pm.log 00:01:25.436 18:34:37 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:01:25.436 18:34:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.436 18:34:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.436 18:34:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.436 18:34:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.436 Thu Jul 25 04:34:37 PM UTC 2024 00:01:25.436 18:34:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.436 v24.05-15-g241d0f3c9 00:01:25.436 18:34:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:25.436 18:34:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.436 18:34:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.436 18:34:37 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:25.436 18:34:37 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:25.436 18:34:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.436 ************************************ 00:01:25.436 START TEST ubsan 00:01:25.436 ************************************ 00:01:25.436 18:34:37 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:25.436 using ubsan 00:01:25.436 00:01:25.436 real 0m0.000s 00:01:25.436 user 0m0.000s 00:01:25.436 sys 0m0.000s 00:01:25.436 18:34:37 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:25.436 18:34:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.436 ************************************ 00:01:25.436 END TEST ubsan 00:01:25.436 ************************************ 00:01:25.436 18:34:37 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:25.436 18:34:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:25.436 18:34:37 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:25.436 18:34:37 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:25.436 18:34:37 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:25.436 18:34:37 -- common/autotest_common.sh@10 -- $ set +x 
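For reference, everything this job runs is driven by the autorun-spdk.conf written above and SPDK's autorun entry point. A condensed reproduction sketch, using only the values and the invocation already shown in this log (workspace paths are this run's CI paths; adjust for a local checkout):

# Sketch: recreate this job's configuration and entry point (values copied from the log above)
cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
SPDK_TEST_VFIOUSER=1
SPDK_RUN_UBSAN=1
NET_TYPE=phy
SPDK_TEST_NATIVE_DPDK=v22.11.4
SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
RUN_NIGHTLY=1
EOF
spdk/autorun.sh "$PWD/autorun-spdk.conf"   # same call as the "+ spdk/autorun.sh ..." line earlier in this log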
00:01:25.436 ************************************ 00:01:25.436 START TEST build_native_dpdk 00:01:25.436 ************************************ 00:01:25.436 18:34:37 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:25.436 caf0f5d395 version: 22.11.4 00:01:25.436 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:25.436 dc9c799c7d vhost: fix missing spinlock unlock 00:01:25.436 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:25.436 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:25.436 
18:34:37 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:25.436 patching file config/rte_config.h 00:01:25.436 Hunk #1 succeeded at 60 (offset 1 line). 00:01:25.436 18:34:37 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:25.436 18:34:37 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:25.694 18:34:37 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:25.694 18:34:37 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:25.694 patching file lib/pcapng/rte_pcapng.c 00:01:25.694 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:25.694 18:34:37 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:25.694 18:34:37 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:25.694 18:34:37 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:25.694 18:34:37 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:25.694 18:34:37 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.883 The Meson build system 00:01:29.883 Version: 1.3.1 00:01:29.883 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.883 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:29.883 Build type: native build 00:01:29.883 Program cat found: YES (/usr/bin/cat) 00:01:29.883 Project name: DPDK 00:01:29.883 Project version: 22.11.4 00:01:29.883 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:29.883 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:29.883 Host machine cpu family: x86_64 00:01:29.883 Host machine cpu: x86_64 00:01:29.883 Message: ## Building in Developer Mode ## 00:01:29.883 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:29.883 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:29.883 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:29.883 Program objdump found: YES (/usr/bin/objdump) 00:01:29.883 Program python3 found: YES (/usr/bin/python3) 00:01:29.883 Program cat found: YES (/usr/bin/cat) 00:01:29.883 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:29.883 Checking for size of "void *" : 8 00:01:29.883 Checking for size of "void *" : 8 (cached) 00:01:29.883 Library m found: YES 00:01:29.883 Library numa found: YES 00:01:29.883 Has header "numaif.h" : YES 00:01:29.883 Library fdt found: NO 00:01:29.883 Library execinfo found: NO 00:01:29.883 Has header "execinfo.h" : YES 00:01:29.883 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:29.883 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:29.883 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:29.883 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:29.883 Run-time dependency openssl found: YES 3.0.9 00:01:29.883 Run-time dependency libpcap found: YES 1.10.4 00:01:29.883 Has header "pcap.h" with dependency libpcap: YES 00:01:29.883 Compiler for C supports arguments -Wcast-qual: YES 00:01:29.883 Compiler for C supports arguments -Wdeprecated: YES 00:01:29.883 Compiler for C supports arguments -Wformat: YES 00:01:29.883 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:29.883 Compiler for C supports arguments -Wformat-security: NO 00:01:29.883 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.883 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:29.883 Compiler for C supports arguments -Wnested-externs: YES 00:01:29.883 Compiler for C supports arguments -Wold-style-definition: YES 00:01:29.883 Compiler for C supports arguments -Wpointer-arith: YES 00:01:29.883 Compiler for C supports arguments -Wsign-compare: YES 00:01:29.883 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:29.883 Compiler for C supports arguments -Wundef: YES 00:01:29.883 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.883 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:29.883 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:29.883 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:29.883 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:29.883 Compiler for C supports arguments -mavx512f: YES 00:01:29.883 Checking if "AVX512 checking" compiles: YES 00:01:29.883 Fetching value of define "__SSE4_2__" : 1 00:01:29.883 Fetching value of define "__AES__" : 1 00:01:29.883 Fetching value of define "__AVX__" : 1 00:01:29.883 Fetching value of define "__AVX2__" : (undefined) 00:01:29.884 Fetching value of define "__AVX512BW__" : (undefined) 00:01:29.884 Fetching value of define "__AVX512CD__" : (undefined) 00:01:29.884 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:29.884 Fetching value of define "__AVX512F__" : (undefined) 00:01:29.884 Fetching value of define "__AVX512VL__" : (undefined) 00:01:29.884 Fetching value of define "__PCLMUL__" : 1 00:01:29.884 Fetching value of define "__RDRND__" : 1 00:01:29.884 Fetching value of define "__RDSEED__" : (undefined) 00:01:29.884 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:29.884 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:29.884 Message: lib/kvargs: Defining dependency "kvargs" 00:01:29.884 Message: lib/telemetry: Defining dependency "telemetry" 00:01:29.884 Checking for function "getentropy" : YES 00:01:29.884 Message: lib/eal: Defining dependency "eal" 00:01:29.884 Message: lib/ring: Defining dependency "ring" 00:01:29.884 Message: lib/rcu: Defining dependency "rcu" 00:01:29.884 Message: lib/mempool: Defining dependency "mempool" 00:01:29.884 Message: lib/mbuf: Defining dependency "mbuf" 00:01:29.884 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:29.884 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.884 Compiler for C supports arguments -mpclmul: YES 00:01:29.884 Compiler for C supports arguments -maes: YES 00:01:29.884 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.884 Compiler for C supports arguments -mavx512bw: YES 00:01:29.884 Compiler for C supports arguments -mavx512dq: YES 00:01:29.884 Compiler for C supports arguments -mavx512vl: YES 00:01:29.884 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:29.884 Compiler for C supports arguments -mavx2: YES 00:01:29.884 Compiler for C supports arguments -mavx: YES 00:01:29.884 Message: lib/net: Defining dependency "net" 00:01:29.884 Message: lib/meter: Defining dependency "meter" 00:01:29.884 Message: lib/ethdev: Defining dependency "ethdev" 00:01:29.884 Message: lib/pci: Defining dependency "pci" 00:01:29.884 Message: lib/cmdline: Defining dependency "cmdline" 00:01:29.884 Message: lib/metrics: Defining dependency "metrics" 00:01:29.884 Message: lib/hash: Defining dependency "hash" 00:01:29.884 Message: lib/timer: Defining dependency "timer" 00:01:29.884 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:29.884 Compiler for C supports arguments -mavx2: YES (cached) 00:01:29.884 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.884 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:29.884 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:29.884 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:29.884 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:29.884 Message: lib/acl: Defining dependency "acl" 00:01:29.884 Message: lib/bbdev: Defining dependency "bbdev" 00:01:29.884 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:29.884 Run-time dependency libelf found: YES 0.190 00:01:29.884 Message: lib/bpf: Defining dependency "bpf" 00:01:29.884 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:29.884 Message: lib/compressdev: Defining dependency "compressdev" 00:01:29.884 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:29.884 Message: lib/distributor: Defining dependency "distributor" 00:01:29.884 Message: lib/efd: Defining dependency "efd" 00:01:29.884 Message: lib/eventdev: Defining dependency "eventdev" 00:01:29.884 Message: lib/gpudev: Defining dependency "gpudev" 00:01:29.884 Message: lib/gro: Defining dependency "gro" 00:01:29.884 Message: lib/gso: Defining dependency "gso" 00:01:29.884 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:29.884 Message: lib/jobstats: Defining dependency "jobstats" 00:01:29.884 Message: lib/latencystats: Defining dependency "latencystats" 00:01:29.884 Message: lib/lpm: Defining dependency "lpm" 00:01:29.884 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.884 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.884 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:29.884 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:29.884 Message: lib/member: Defining dependency "member" 00:01:29.884 Message: lib/pcapng: Defining dependency "pcapng" 00:01:29.884 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:29.884 Message: lib/power: Defining dependency "power" 00:01:29.884 Message: lib/rawdev: Defining dependency "rawdev" 00:01:29.884 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:29.884 Message: lib/dmadev: Defining dependency "dmadev" 00:01:29.884 Message: lib/rib: Defining dependency "rib" 00:01:29.884 Message: lib/reorder: Defining dependency "reorder" 00:01:29.884 Message: lib/sched: Defining dependency "sched" 00:01:29.884 Message: lib/security: Defining dependency "security" 00:01:29.884 Message: lib/stack: Defining dependency "stack" 00:01:29.884 Has header "linux/userfaultfd.h" : YES 00:01:29.884 Message: lib/vhost: Defining dependency "vhost" 00:01:29.884 Message: lib/ipsec: Defining dependency "ipsec" 00:01:29.884 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.884 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.884 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:29.884 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.884 Message: lib/fib: Defining dependency "fib" 00:01:29.884 Message: lib/port: Defining dependency "port" 00:01:29.884 Message: lib/pdump: Defining dependency "pdump" 00:01:29.884 Message: lib/table: Defining dependency "table" 00:01:29.884 Message: lib/pipeline: Defining dependency "pipeline" 00:01:29.884 Message: lib/graph: Defining dependency "graph" 00:01:29.884 Message: lib/node: Defining dependency "node" 00:01:29.884 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.884 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.884 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.884 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.884 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:29.884 Compiler for C supports arguments -Wno-unused-value: YES 00:01:31.271 Compiler for C supports arguments -Wno-format: YES 00:01:31.271 Compiler for C supports arguments -Wno-format-security: YES 00:01:31.271 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:31.271 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:31.271 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:31.271 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:31.271 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:31.271 Compiler for C supports arguments -mavx2: YES (cached) 00:01:31.271 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:31.271 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:31.271 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:31.271 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:31.271 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:31.271 Program doxygen found: YES (/usr/bin/doxygen) 00:01:31.271 Configuring doxy-api.conf using configuration 00:01:31.271 Program sphinx-build found: NO 00:01:31.271 Configuring rte_build_config.h using configuration 00:01:31.271 Message: 00:01:31.271 ================= 00:01:31.271 Applications Enabled 00:01:31.271 ================= 00:01:31.271 00:01:31.271 apps: 00:01:31.271 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:31.271 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:31.271 test-security-perf, 00:01:31.271 00:01:31.271 Message: 00:01:31.271 ================= 00:01:31.271 Libraries Enabled 00:01:31.271 ================= 00:01:31.271 00:01:31.271 libs: 00:01:31.271 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:31.271 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:01:31.271 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:31.271 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:31.271 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:31.271 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:31.271 table, pipeline, graph, node, 00:01:31.271 00:01:31.271 Message: 00:01:31.271 =============== 00:01:31.271 Drivers Enabled 00:01:31.271 =============== 00:01:31.271 00:01:31.271 common: 00:01:31.271 00:01:31.271 bus: 00:01:31.271 pci, vdev, 00:01:31.271 mempool: 00:01:31.271 ring, 00:01:31.271 dma: 00:01:31.271 00:01:31.271 net: 00:01:31.271 i40e, 00:01:31.271 raw: 00:01:31.271 00:01:31.271 crypto: 00:01:31.271 00:01:31.271 compress: 00:01:31.271 00:01:31.271 regex: 00:01:31.271 00:01:31.271 vdpa: 00:01:31.271 00:01:31.271 event: 00:01:31.271 00:01:31.271 baseband: 00:01:31.271 00:01:31.271 gpu: 00:01:31.271 00:01:31.271 00:01:31.271 Message: 00:01:31.271 ================= 00:01:31.271 Content Skipped 00:01:31.271 ================= 00:01:31.271 00:01:31.271 apps: 00:01:31.271 00:01:31.271 libs: 00:01:31.271 kni: explicitly disabled via build config (deprecated lib) 00:01:31.271 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:31.271 00:01:31.271 drivers: 00:01:31.271 common/cpt: not in enabled drivers build config 00:01:31.271 common/dpaax: not in enabled drivers build config 00:01:31.271 common/iavf: not in enabled drivers build config 00:01:31.271 common/idpf: not in enabled drivers build config 00:01:31.271 common/mvep: not in enabled drivers build config 00:01:31.271 common/octeontx: not in enabled drivers build config 00:01:31.271 bus/auxiliary: not in enabled drivers build config 00:01:31.271 bus/dpaa: not in enabled drivers build config 00:01:31.271 bus/fslmc: not in enabled drivers build config 00:01:31.271 bus/ifpga: not in enabled drivers build config 00:01:31.271 bus/vmbus: not in enabled drivers build config 00:01:31.271 common/cnxk: not in enabled drivers build config 00:01:31.271 common/mlx5: not in enabled drivers build config 00:01:31.271 common/qat: not in enabled drivers build config 00:01:31.271 common/sfc_efx: not in enabled drivers build config 00:01:31.271 mempool/bucket: not in enabled drivers build config 00:01:31.271 mempool/cnxk: not in enabled drivers build config 00:01:31.271 mempool/dpaa: not in enabled drivers build config 00:01:31.271 mempool/dpaa2: not in enabled drivers build config 00:01:31.271 mempool/octeontx: not in enabled drivers build config 00:01:31.271 mempool/stack: not in enabled drivers build config 00:01:31.271 dma/cnxk: not in enabled drivers build config 00:01:31.271 dma/dpaa: not in enabled drivers build config 00:01:31.271 dma/dpaa2: not in enabled drivers build config 00:01:31.271 dma/hisilicon: not in enabled drivers build config 00:01:31.271 dma/idxd: not in enabled drivers build config 00:01:31.271 dma/ioat: not in enabled drivers build config 00:01:31.271 dma/skeleton: not in enabled drivers build config 00:01:31.271 net/af_packet: not in enabled drivers build config 00:01:31.271 net/af_xdp: not in enabled drivers build config 00:01:31.271 net/ark: not in enabled drivers build config 00:01:31.271 net/atlantic: not in enabled drivers build config 00:01:31.271 net/avp: not in enabled drivers build config 00:01:31.271 net/axgbe: not in enabled drivers build config 00:01:31.271 net/bnx2x: not in enabled drivers build config 00:01:31.271 net/bnxt: not in 
enabled drivers build config 00:01:31.271 net/bonding: not in enabled drivers build config 00:01:31.271 net/cnxk: not in enabled drivers build config 00:01:31.271 net/cxgbe: not in enabled drivers build config 00:01:31.271 net/dpaa: not in enabled drivers build config 00:01:31.271 net/dpaa2: not in enabled drivers build config 00:01:31.271 net/e1000: not in enabled drivers build config 00:01:31.271 net/ena: not in enabled drivers build config 00:01:31.271 net/enetc: not in enabled drivers build config 00:01:31.271 net/enetfec: not in enabled drivers build config 00:01:31.271 net/enic: not in enabled drivers build config 00:01:31.271 net/failsafe: not in enabled drivers build config 00:01:31.271 net/fm10k: not in enabled drivers build config 00:01:31.271 net/gve: not in enabled drivers build config 00:01:31.271 net/hinic: not in enabled drivers build config 00:01:31.271 net/hns3: not in enabled drivers build config 00:01:31.271 net/iavf: not in enabled drivers build config 00:01:31.271 net/ice: not in enabled drivers build config 00:01:31.271 net/idpf: not in enabled drivers build config 00:01:31.271 net/igc: not in enabled drivers build config 00:01:31.271 net/ionic: not in enabled drivers build config 00:01:31.271 net/ipn3ke: not in enabled drivers build config 00:01:31.271 net/ixgbe: not in enabled drivers build config 00:01:31.272 net/kni: not in enabled drivers build config 00:01:31.272 net/liquidio: not in enabled drivers build config 00:01:31.272 net/mana: not in enabled drivers build config 00:01:31.272 net/memif: not in enabled drivers build config 00:01:31.272 net/mlx4: not in enabled drivers build config 00:01:31.272 net/mlx5: not in enabled drivers build config 00:01:31.272 net/mvneta: not in enabled drivers build config 00:01:31.272 net/mvpp2: not in enabled drivers build config 00:01:31.272 net/netvsc: not in enabled drivers build config 00:01:31.272 net/nfb: not in enabled drivers build config 00:01:31.272 net/nfp: not in enabled drivers build config 00:01:31.272 net/ngbe: not in enabled drivers build config 00:01:31.272 net/null: not in enabled drivers build config 00:01:31.272 net/octeontx: not in enabled drivers build config 00:01:31.272 net/octeon_ep: not in enabled drivers build config 00:01:31.272 net/pcap: not in enabled drivers build config 00:01:31.272 net/pfe: not in enabled drivers build config 00:01:31.272 net/qede: not in enabled drivers build config 00:01:31.272 net/ring: not in enabled drivers build config 00:01:31.272 net/sfc: not in enabled drivers build config 00:01:31.272 net/softnic: not in enabled drivers build config 00:01:31.272 net/tap: not in enabled drivers build config 00:01:31.272 net/thunderx: not in enabled drivers build config 00:01:31.272 net/txgbe: not in enabled drivers build config 00:01:31.272 net/vdev_netvsc: not in enabled drivers build config 00:01:31.272 net/vhost: not in enabled drivers build config 00:01:31.272 net/virtio: not in enabled drivers build config 00:01:31.272 net/vmxnet3: not in enabled drivers build config 00:01:31.272 raw/cnxk_bphy: not in enabled drivers build config 00:01:31.272 raw/cnxk_gpio: not in enabled drivers build config 00:01:31.272 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:31.272 raw/ifpga: not in enabled drivers build config 00:01:31.272 raw/ntb: not in enabled drivers build config 00:01:31.272 raw/skeleton: not in enabled drivers build config 00:01:31.272 crypto/armv8: not in enabled drivers build config 00:01:31.272 crypto/bcmfs: not in enabled drivers build config 00:01:31.272 
crypto/caam_jr: not in enabled drivers build config 00:01:31.272 crypto/ccp: not in enabled drivers build config 00:01:31.272 crypto/cnxk: not in enabled drivers build config 00:01:31.272 crypto/dpaa_sec: not in enabled drivers build config 00:01:31.272 crypto/dpaa2_sec: not in enabled drivers build config 00:01:31.272 crypto/ipsec_mb: not in enabled drivers build config 00:01:31.272 crypto/mlx5: not in enabled drivers build config 00:01:31.272 crypto/mvsam: not in enabled drivers build config 00:01:31.272 crypto/nitrox: not in enabled drivers build config 00:01:31.272 crypto/null: not in enabled drivers build config 00:01:31.272 crypto/octeontx: not in enabled drivers build config 00:01:31.272 crypto/openssl: not in enabled drivers build config 00:01:31.272 crypto/scheduler: not in enabled drivers build config 00:01:31.272 crypto/uadk: not in enabled drivers build config 00:01:31.272 crypto/virtio: not in enabled drivers build config 00:01:31.272 compress/isal: not in enabled drivers build config 00:01:31.272 compress/mlx5: not in enabled drivers build config 00:01:31.272 compress/octeontx: not in enabled drivers build config 00:01:31.272 compress/zlib: not in enabled drivers build config 00:01:31.272 regex/mlx5: not in enabled drivers build config 00:01:31.272 regex/cn9k: not in enabled drivers build config 00:01:31.272 vdpa/ifc: not in enabled drivers build config 00:01:31.272 vdpa/mlx5: not in enabled drivers build config 00:01:31.272 vdpa/sfc: not in enabled drivers build config 00:01:31.272 event/cnxk: not in enabled drivers build config 00:01:31.272 event/dlb2: not in enabled drivers build config 00:01:31.272 event/dpaa: not in enabled drivers build config 00:01:31.272 event/dpaa2: not in enabled drivers build config 00:01:31.272 event/dsw: not in enabled drivers build config 00:01:31.272 event/opdl: not in enabled drivers build config 00:01:31.272 event/skeleton: not in enabled drivers build config 00:01:31.272 event/sw: not in enabled drivers build config 00:01:31.272 event/octeontx: not in enabled drivers build config 00:01:31.272 baseband/acc: not in enabled drivers build config 00:01:31.272 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:31.272 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:31.272 baseband/la12xx: not in enabled drivers build config 00:01:31.272 baseband/null: not in enabled drivers build config 00:01:31.272 baseband/turbo_sw: not in enabled drivers build config 00:01:31.272 gpu/cuda: not in enabled drivers build config 00:01:31.272 00:01:31.272 00:01:31.272 Build targets in project: 316 00:01:31.272 00:01:31.272 DPDK 22.11.4 00:01:31.272 00:01:31.272 User defined options 00:01:31.272 libdir : lib 00:01:31.272 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.272 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:31.272 c_link_args : 00:01:31.272 enable_docs : false 00:01:31.272 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:31.272 enable_kmods : false 00:01:31.272 machine : native 00:01:31.272 tests : false 00:01:31.272 00:01:31.272 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.272 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
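The ninja run that follows compiles the 316 DPDK targets configured above. Combined with the meson invocation recorded earlier, the external DPDK v22.11.4 build used by this job can be reproduced roughly as below (a sketch: the options are copied verbatim from this log, `meson setup` is used to address the deprecation warning printed above, and the final install into the prefix is an assumed follow-up not shown in this section):

# Sketch: rebuild the external DPDK as configured in this log
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
meson setup build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
ninja -C build-tmp -j48        # same build step as below
ninja -C build-tmp install     # assumed follow-up to populate the --prefix directory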
00:01:31.272 18:34:42 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:31.272 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:31.272 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:31.272 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:31.272 [3/745] Generating lib/rte_telemetry_def with a custom command 00:01:31.272 [4/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:31.272 [5/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:31.272 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:31.272 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:31.272 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:31.272 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:31.537 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.537 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:31.537 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.537 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.537 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.537 [15/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:31.537 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:31.537 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:31.537 [18/745] Linking static target lib/librte_kvargs.a 00:01:31.537 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:31.537 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:31.537 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:31.537 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:31.537 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:31.537 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:31.537 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:31.537 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:31.537 [27/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:31.537 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:31.537 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:31.537 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:31.537 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:31.537 [32/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:31.537 [33/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:31.537 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:31.537 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:31.537 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.537 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:31.537 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.537 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:31.537 [40/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.537 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.537 [42/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.537 [43/745] Generating lib/rte_eal_def with a custom command 00:01:31.537 [44/745] Generating lib/rte_eal_mingw with a custom command 00:01:31.537 [45/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:31.537 [46/745] Generating lib/rte_ring_mingw with a custom command 00:01:31.537 [47/745] Generating lib/rte_ring_def with a custom command 00:01:31.537 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.537 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:31.537 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:31.799 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.799 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.799 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.799 [54/745] Generating lib/rte_rcu_def with a custom command 00:01:31.799 [55/745] Generating lib/rte_rcu_mingw with a custom command 00:01:31.799 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.799 [57/745] Generating lib/rte_mempool_def with a custom command 00:01:31.799 [58/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.799 [59/745] Generating lib/rte_mempool_mingw with a custom command 00:01:31.799 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.799 [61/745] Generating lib/rte_mbuf_def with a custom command 00:01:31.799 [62/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:31.799 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:31.799 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:31.799 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.799 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:31.799 [67/745] Generating lib/rte_net_def with a custom command 00:01:31.799 [68/745] Generating lib/rte_net_mingw with a custom command 00:01:31.799 [69/745] Generating lib/rte_meter_def with a custom command 00:01:31.799 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.799 [71/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.799 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:31.799 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:31.799 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:31.799 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:31.799 [76/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.799 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.799 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.799 [79/745] Generating lib/rte_ethdev_def with a custom command 00:01:31.799 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:32.061 [81/745] Linking target lib/librte_kvargs.so.23.0 00:01:32.061 [82/745] Linking static target lib/librte_ring.a 00:01:32.061 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:32.061 [84/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:32.061 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:32.061 [86/745] Generating lib/rte_pci_def with a custom command 00:01:32.061 [87/745] Linking static target lib/librte_meter.a 00:01:32.061 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.061 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:32.061 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:32.061 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:32.061 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:32.061 [93/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:32.061 [94/745] Linking static target lib/librte_pci.a 00:01:32.061 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:32.321 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:32.321 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:32.321 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:32.321 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:32.321 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:32.321 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.321 [102/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.321 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:32.321 [104/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:32.321 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:32.321 [106/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:32.321 [107/745] Generating lib/rte_cmdline_def with a custom command 00:01:32.321 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:32.321 [109/745] Linking static target lib/librte_telemetry.a 00:01:32.582 [110/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:32.582 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:32.582 [112/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.582 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:32.582 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:32.582 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:32.582 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:32.582 [117/745] Generating lib/rte_hash_def with a custom command 00:01:32.582 [118/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:32.582 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:32.582 [120/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.582 [121/745] Generating lib/rte_timer_def with a custom command 00:01:32.582 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:32.843 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.843 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:32.843 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:32.844 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:32.844 [127/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:32.844 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:32.844 [129/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:32.844 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:32.844 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:32.844 [132/745] Generating lib/rte_acl_def with a custom command 00:01:32.844 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:32.844 [134/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:32.844 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:32.844 [136/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.844 [137/745] Generating lib/rte_bbdev_def with a custom command 00:01:33.106 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:01:33.106 [139/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:33.106 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:33.106 [141/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.106 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:33.107 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:33.107 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:33.107 [145/745] Linking target lib/librte_telemetry.so.23.0 00:01:33.107 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:33.107 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:33.107 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:33.107 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:33.107 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:33.107 [151/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:33.107 [152/745] Generating lib/rte_bpf_mingw with a custom command 00:01:33.107 [153/745] Generating lib/rte_cfgfile_def with a custom command 00:01:33.107 [154/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:33.107 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:33.107 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:33.371 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:33.371 [158/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:33.371 [159/745] Generating lib/rte_compressdev_def with a custom command 00:01:33.371 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:33.371 [161/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:33.371 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:33.371 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 
00:01:33.371 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:33.371 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:33.371 [166/745] Generating lib/rte_cryptodev_def with a custom command 00:01:33.371 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:33.371 [168/745] Linking static target lib/librte_rcu.a 00:01:33.371 [169/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:33.371 [170/745] Generating lib/rte_distributor_def with a custom command 00:01:33.371 [171/745] Linking static target lib/librte_cmdline.a 00:01:33.371 [172/745] Generating lib/rte_distributor_mingw with a custom command 00:01:33.371 [173/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:33.371 [174/745] Linking static target lib/librte_timer.a 00:01:33.371 [175/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:33.371 [176/745] Generating lib/rte_efd_def with a custom command 00:01:33.371 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:33.371 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:33.633 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:33.633 [180/745] Linking static target lib/librte_net.a 00:01:33.633 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:33.633 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:33.633 [183/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:33.633 [184/745] Linking static target lib/librte_metrics.a 00:01:33.633 [185/745] Linking static target lib/librte_cfgfile.a 00:01:33.896 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:33.896 [187/745] Linking static target lib/librte_mempool.a 00:01:33.896 [188/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.896 [189/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.896 [190/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.896 [191/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:33.896 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:34.159 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:34.159 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:34.159 [195/745] Linking static target lib/librte_eal.a 00:01:34.159 [196/745] Generating lib/rte_eventdev_def with a custom command 00:01:34.159 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:34.159 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:34.159 [199/745] Generating lib/rte_gpudev_def with a custom command 00:01:34.159 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:34.159 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:34.159 [202/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.159 [203/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:34.159 [204/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:34.159 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:34.159 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:34.159 [207/745] Linking static target 
lib/librte_bitratestats.a 00:01:34.423 [208/745] Generating lib/rte_gro_mingw with a custom command 00:01:34.423 [209/745] Generating lib/rte_gro_def with a custom command 00:01:34.423 [210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.423 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:34.423 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:34.423 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:34.423 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:34.687 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:34.687 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.687 [217/745] Generating lib/rte_gso_def with a custom command 00:01:34.687 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:34.687 [219/745] Generating lib/rte_gso_mingw with a custom command 00:01:34.687 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:34.687 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:34.687 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:34.687 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:34.687 [224/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.947 [225/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:34.947 [226/745] Linking static target lib/librte_bbdev.a 00:01:34.947 [227/745] Generating lib/rte_ip_frag_def with a custom command 00:01:34.947 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:34.947 [229/745] Generating lib/rte_jobstats_def with a custom command 00:01:34.947 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:34.947 [231/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:34.947 [232/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:34.947 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:34.947 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:34.948 [235/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.948 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:34.948 [237/745] Generating lib/rte_lpm_def with a custom command 00:01:34.948 [238/745] Linking static target lib/librte_compressdev.a 00:01:34.948 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:34.948 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:34.948 [241/745] Linking static target lib/librte_jobstats.a 00:01:35.211 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:35.211 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:35.211 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.471 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:35.471 [246/745] Linking static target lib/librte_distributor.a 00:01:35.471 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:35.471 [248/745] Generating 
lib/rte_member_def with a custom command 00:01:35.471 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:35.471 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:35.471 [251/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.471 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:35.732 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:35.732 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:35.732 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:35.732 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:35.732 [257/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:35.732 [258/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:35.732 [259/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:35.732 [260/745] Linking static target lib/librte_bpf.a 00:01:35.732 [261/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.732 [262/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.732 [263/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.732 [264/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.732 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:35.732 [266/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:35.732 [267/745] Generating lib/rte_power_def with a custom command 00:01:35.993 [268/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.993 [269/745] Generating lib/rte_power_mingw with a custom command 00:01:35.993 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:35.993 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:35.993 [272/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:35.993 [273/745] Generating lib/rte_rawdev_def with a custom command 00:01:35.993 [274/745] Linking static target lib/librte_gro.a 00:01:35.993 [275/745] Linking static target lib/librte_gpudev.a 00:01:35.993 [276/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:35.993 [277/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:35.993 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:35.993 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:35.993 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.993 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:35.993 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:35.993 [283/745] Generating lib/rte_rib_mingw with a custom command 00:01:35.993 [284/745] Generating lib/rte_rib_def with a custom command 00:01:35.993 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:36.257 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:36.257 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:36.257 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:36.257 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.258 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.258 [291/745] 
Generating lib/rte_sched_def with a custom command 00:01:36.258 [292/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:36.258 [293/745] Generating lib/rte_sched_mingw with a custom command 00:01:36.258 [294/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:36.525 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:36.525 [296/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:36.525 [297/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:36.525 [298/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:36.525 [299/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:36.525 [300/745] Generating lib/rte_security_mingw with a custom command 00:01:36.525 [301/745] Generating lib/rte_security_def with a custom command 00:01:36.525 [302/745] Linking static target lib/librte_latencystats.a 00:01:36.525 [303/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.525 [304/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:36.525 [305/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:36.525 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:36.525 [307/745] Generating lib/rte_stack_def with a custom command 00:01:36.525 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:36.525 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:36.525 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:36.525 [311/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:36.525 [312/745] Linking static target lib/librte_rawdev.a 00:01:36.525 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:36.525 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:36.525 [315/745] Linking static target lib/librte_stack.a 00:01:36.525 [316/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:36.525 [317/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:36.525 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:36.525 [319/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:36.785 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:36.785 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:36.785 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.785 [323/745] Linking static target lib/librte_dmadev.a 00:01:36.785 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.785 [325/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:36.785 [326/745] Linking static target lib/librte_ip_frag.a 00:01:36.785 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:36.785 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:37.048 [329/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:37.048 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:37.048 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.048 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:01:37.048 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:37.314 [334/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:37.314 [335/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.314 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.314 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:37.314 [338/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.314 [339/745] Generating lib/rte_fib_def with a custom command 00:01:37.314 [340/745] Generating lib/rte_fib_mingw with a custom command 00:01:37.314 [341/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:37.314 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:37.314 [343/745] Linking static target lib/librte_regexdev.a 00:01:37.577 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:37.577 [345/745] Linking static target lib/librte_gso.a 00:01:37.577 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.577 [347/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:37.577 [348/745] Linking static target lib/librte_efd.a 00:01:37.577 [349/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:37.839 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.839 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:37.839 [352/745] Linking static target lib/librte_pcapng.a 00:01:37.839 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:37.839 [354/745] Linking static target lib/librte_lpm.a 00:01:37.839 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:37.839 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.839 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.104 [358/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:38.104 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.104 [360/745] Linking static target lib/librte_reorder.a 00:01:38.104 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.104 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.104 [363/745] Generating lib/rte_port_def with a custom command 00:01:38.104 [364/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:38.104 [365/745] Generating lib/rte_port_mingw with a custom command 00:01:38.104 [366/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.104 [367/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:38.104 [368/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:38.367 [369/745] Generating lib/rte_pdump_def with a custom command 00:01:38.367 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:01:38.367 [371/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:38.368 [372/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:38.368 [373/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:38.368 [374/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:38.368 [375/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:38.368 [376/745] Linking static target lib/acl/libavx2_tmp.a 00:01:38.368 [377/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.368 [378/745] Linking static target lib/librte_security.a 00:01:38.368 [379/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:38.368 [380/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.368 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.368 [382/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.368 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:38.629 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.629 [385/745] Linking static target lib/librte_power.a 00:01:38.629 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:38.629 [387/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.629 [388/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:38.629 [389/745] Linking static target lib/librte_rib.a 00:01:38.629 [390/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.629 [391/745] Linking static target lib/librte_hash.a 00:01:38.629 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:38.896 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:38.896 [394/745] Linking static target lib/librte_acl.a 00:01:38.896 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:38.896 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:38.896 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:38.896 [398/745] Generating lib/rte_table_def with a custom command 00:01:38.896 [399/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.156 [400/745] Generating lib/rte_table_mingw with a custom command 00:01:39.156 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.156 [402/745] Linking static target lib/librte_ethdev.a 00:01:39.156 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.421 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.421 [405/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:39.421 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:39.421 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:39.687 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:39.687 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:39.687 [410/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:39.687 [411/745] Generating lib/rte_pipeline_def with a custom command 00:01:39.687 [412/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.687 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:39.687 [414/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.687 [415/745] Linking static target lib/librte_mbuf.a 00:01:39.687 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 
00:01:39.687 [417/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:39.687 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:39.687 [419/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:39.687 [420/745] Generating lib/rte_graph_def with a custom command 00:01:39.687 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:39.687 [422/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:39.687 [423/745] Linking static target lib/librte_fib.a 00:01:39.687 [424/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:39.687 [425/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.687 [426/745] Linking static target lib/librte_eventdev.a 00:01:39.948 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:39.948 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:39.948 [429/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.948 [430/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:39.948 [431/745] Linking static target lib/librte_member.a 00:01:39.948 [432/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:39.948 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:40.214 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:40.214 [435/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:40.214 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:40.214 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:40.214 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:40.214 [439/745] Generating lib/rte_node_def with a custom command 00:01:40.214 [440/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.214 [441/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.214 [442/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.476 [443/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:40.476 [444/745] Linking static target lib/librte_sched.a 00:01:40.476 [445/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:40.476 [446/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:40.476 [447/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:40.476 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:40.476 [449/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.476 [450/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:40.476 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:40.476 [452/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.476 [453/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:40.476 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.476 [455/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.476 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:40.476 [457/745] Generating drivers/rte_mempool_ring_mingw with a 
custom command 00:01:40.476 [458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.743 [459/745] Linking static target lib/librte_cryptodev.a 00:01:40.743 [460/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:40.743 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:40.743 [462/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:40.743 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:40.743 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.743 [465/745] Linking static target lib/librte_pdump.a 00:01:40.743 [466/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.743 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:40.743 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:41.002 [469/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:41.002 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:41.002 [471/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:41.002 [472/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:41.002 [473/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:41.002 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:41.002 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:41.002 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.002 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:41.266 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:41.266 [479/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.266 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:41.266 [481/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:41.266 [482/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:41.266 [483/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:41.266 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:41.266 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.266 [486/745] Linking static target drivers/librte_bus_vdev.a 00:01:41.533 [487/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:41.533 [488/745] Linking static target lib/librte_table.a 00:01:41.533 [489/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:41.533 [490/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.533 [491/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:41.533 [492/745] Linking static target lib/librte_ipsec.a 00:01:41.533 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.533 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.811 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.811 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:41.811 [497/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:42.075 [498/745] 
Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:42.075 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:42.075 [500/745] Linking static target lib/librte_graph.a 00:01:42.075 [501/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:42.075 [502/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:42.075 [503/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:42.075 [504/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.075 [505/745] Linking static target drivers/librte_bus_pci.a 00:01:42.075 [506/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:42.075 [507/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.075 [508/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:42.075 [509/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.075 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:42.336 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:42.336 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:42.336 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:42.599 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.599 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:42.599 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.599 [517/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.861 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:42.861 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:42.861 [520/745] Linking static target lib/librte_port.a 00:01:43.127 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:43.127 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:43.127 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:43.127 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:43.127 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:43.127 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:43.386 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.386 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:43.386 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:43.652 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:43.652 [531/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:43.652 [532/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:43.652 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.652 [534/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.652 [535/745] Linking static target 
drivers/librte_mempool_ring.a 00:01:43.652 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:43.652 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:43.913 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.913 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:43.913 [540/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:43.913 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.487 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:44.487 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:44.487 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:44.487 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:44.487 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:44.487 [547/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:44.487 [548/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:44.487 [549/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:44.753 [550/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:44.753 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:45.012 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:45.012 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:45.012 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:45.012 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:45.275 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:45.275 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:45.275 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:45.538 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:45.801 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:45.801 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:45.801 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:45.801 [563/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:45.801 [564/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:46.066 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:46.066 [566/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:46.066 [567/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:46.066 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:46.066 [569/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:46.066 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:46.328 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:46.328 [572/745] 
Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:46.328 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:46.328 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:46.328 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:46.593 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:46.593 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:46.593 [578/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:46.593 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:46.593 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:46.854 [581/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:46.854 [582/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.854 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:46.854 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:46.854 [585/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.854 [586/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:46.854 [587/745] Linking target lib/librte_eal.so.23.0 00:01:47.116 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:47.116 [589/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:47.116 [590/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:47.381 [591/745] Linking target lib/librte_ring.so.23.0 00:01:47.381 [592/745] Linking target lib/librte_meter.so.23.0 00:01:47.381 [593/745] Linking target lib/librte_pci.so.23.0 00:01:47.381 [594/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:47.381 [595/745] Linking target lib/librte_timer.so.23.0 00:01:47.381 [596/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:47.381 [597/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:47.644 [598/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:47.644 [599/745] Linking target lib/librte_rcu.so.23.0 00:01:47.644 [600/745] Linking target lib/librte_mempool.so.23.0 00:01:47.644 [601/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:47.644 [602/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:47.644 [603/745] Linking target lib/librte_acl.so.23.0 00:01:47.644 [604/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:47.644 [605/745] Linking target lib/librte_cfgfile.so.23.0 00:01:47.644 [606/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:47.644 [607/745] Linking target lib/librte_jobstats.so.23.0 00:01:47.644 [608/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:47.644 [609/745] Linking target lib/librte_stack.so.23.0 00:01:47.644 [610/745] Linking target lib/librte_rawdev.so.23.0 00:01:47.644 [611/745] Linking target lib/librte_dmadev.so.23.0 00:01:47.644 [612/745] Linking target 
lib/librte_graph.so.23.0 00:01:47.644 [613/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:47.644 [614/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:47.644 [615/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:47.904 [616/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:47.904 [617/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:47.904 [618/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:47.904 [619/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:47.904 [620/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:47.904 [621/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:47.904 [622/745] Linking target lib/librte_rib.so.23.0 00:01:47.904 [623/745] Linking target lib/librte_mbuf.so.23.0 00:01:47.904 [624/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:47.904 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:47.904 [626/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:47.904 [627/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:47.904 [628/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:47.904 [629/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:47.904 [630/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:48.162 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:48.162 [632/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:48.162 [633/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:48.162 [634/745] Linking target lib/librte_fib.so.23.0 00:01:48.162 [635/745] Linking target lib/librte_sched.so.23.0 00:01:48.162 [636/745] Linking target lib/librte_reorder.so.23.0 00:01:48.162 [637/745] Linking target lib/librte_distributor.so.23.0 00:01:48.162 [638/745] Linking target lib/librte_gpudev.so.23.0 00:01:48.162 [639/745] Linking target lib/librte_bbdev.so.23.0 00:01:48.162 [640/745] Linking target lib/librte_compressdev.so.23.0 00:01:48.162 [641/745] Linking target lib/librte_regexdev.so.23.0 00:01:48.162 [642/745] Linking target lib/librte_net.so.23.0 00:01:48.162 [643/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:48.162 [644/745] Linking target lib/librte_cryptodev.so.23.0 00:01:48.162 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:48.162 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:48.162 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:48.422 [648/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:48.422 [649/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:48.422 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:48.422 [651/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:48.422 [652/745] Linking target lib/librte_cmdline.so.23.0 00:01:48.422 [653/745] Linking target lib/librte_security.so.23.0 00:01:48.422 [654/745] Linking target lib/librte_ethdev.so.23.0 00:01:48.422 [655/745] Linking target lib/librte_hash.so.23.0 
00:01:48.422 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:48.422 [657/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:48.422 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:48.422 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:48.422 [660/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:48.422 [661/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:48.422 [662/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:48.422 [663/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:48.422 [664/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:48.681 [665/745] Linking target lib/librte_efd.so.23.0 00:01:48.681 [666/745] Linking target lib/librte_member.so.23.0 00:01:48.681 [667/745] Linking target lib/librte_pcapng.so.23.0 00:01:48.681 [668/745] Linking target lib/librte_bpf.so.23.0 00:01:48.681 [669/745] Linking target lib/librte_ipsec.so.23.0 00:01:48.681 [670/745] Linking target lib/librte_lpm.so.23.0 00:01:48.681 [671/745] Linking target lib/librte_gso.so.23.0 00:01:48.681 [672/745] Linking target lib/librte_ip_frag.so.23.0 00:01:48.681 [673/745] Linking target lib/librte_metrics.so.23.0 00:01:48.681 [674/745] Linking target lib/librte_power.so.23.0 00:01:48.681 [675/745] Linking target lib/librte_gro.so.23.0 00:01:48.681 [676/745] Linking target lib/librte_eventdev.so.23.0 00:01:48.681 [677/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:48.681 [678/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:48.681 [679/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:48.681 [680/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:48.681 [681/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:48.681 [682/745] Linking target lib/librte_pdump.so.23.0 00:01:48.681 [683/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:48.681 [684/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:48.939 [685/745] Linking target lib/librte_latencystats.so.23.0 00:01:48.939 [686/745] Linking target lib/librte_bitratestats.so.23.0 00:01:48.939 [687/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:48.939 [688/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:48.939 [689/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:48.939 [690/745] Linking target lib/librte_port.so.23.0 00:01:48.939 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:48.939 [692/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:49.197 [693/745] Linking target lib/librte_table.so.23.0 00:01:49.197 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:49.197 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:49.455 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:49.455 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 
00:01:49.713 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:49.713 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:49.971 [700/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:49.971 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:49.971 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:50.229 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:50.229 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:50.229 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.229 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.229 [707/745] Linking static target drivers/librte_net_i40e.a 00:01:50.229 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:50.794 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:50.794 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:50.794 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.053 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:51.985 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:51.985 [714/745] Linking static target lib/librte_node.a 00:01:52.243 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.243 [716/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:52.243 [717/745] Linking target lib/librte_node.so.23.0 00:01:53.175 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:53.433 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:03.399 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.953 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.953 [722/745] Linking static target lib/librte_vhost.a 00:02:30.887 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.887 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:45.751 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:45.751 [726/745] Linking static target lib/librte_pipeline.a 00:02:45.751 [727/745] Linking target app/dpdk-test-fib 00:02:45.751 [728/745] Linking target app/dpdk-dumpcap 00:02:45.751 [729/745] Linking target app/dpdk-test-regex 00:02:45.751 [730/745] Linking target app/dpdk-proc-info 00:02:45.751 [731/745] Linking target app/dpdk-test-cmdline 00:02:45.751 [732/745] Linking target app/dpdk-test-acl 00:02:45.751 [733/745] Linking target app/dpdk-test-security-perf 00:02:45.751 [734/745] Linking target app/dpdk-test-bbdev 00:02:45.751 [735/745] Linking target app/dpdk-pdump 00:02:45.751 [736/745] Linking target app/dpdk-test-sad 00:02:45.751 [737/745] Linking target app/dpdk-test-flow-perf 00:02:45.751 [738/745] Linking target app/dpdk-test-gpudev 00:02:45.751 [739/745] Linking target app/dpdk-test-pipeline 00:02:45.751 [740/745] Linking target app/dpdk-test-eventdev 00:02:45.751 [741/745] Linking target app/dpdk-test-crypto-perf 00:02:45.751 [742/745] Linking target app/dpdk-test-compress-perf 00:02:45.751 [743/745] Linking target app/dpdk-testpmd 
00:02:47.652 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.652 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:47.652 18:35:59 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:47.652 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:47.652 [0/1] Installing files. 00:02:47.913 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:47.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:47.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:47.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:47.915 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:47.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
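For orientation, the example trees installed above (bpf, vmdq, link_status_interrupt, ip_reassembly, and so on) are standalone DPDK applications whose sources are copied under dpdk/build/share/dpdk/examples and which build against the headers and librte_* libraries installed later in this step. As a minimal sketch, not taken from this log and assuming only the standard EAL entry points (rte_eal_init, rte_eal_cleanup, rte_exit, rte_lcore_count) shipped with this install, such an application reduces to roughly:

    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    int
    main(int argc, char **argv)
    {
        /* Initialize the Environment Abstraction Layer (EAL). */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0)
            rte_exit(EXIT_FAILURE, "Cannot init EAL\n");

        /* Report how many lcores the EAL detected. */
        printf("DPDK EAL up, %u lcores available\n", rte_lcore_count());

        /* Release EAL resources before exiting. */
        rte_eal_cleanup();
        return 0;
    }

A program like this would be compiled against the headers installed under dpdk/build/include and linked with the librte_* libraries placed in dpdk/build/lib by the install step that follows.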
00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:47.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:47.917 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.917 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.917 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.917 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.917 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.917 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.917 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.918 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.918 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.918 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.918 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.176 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.177 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:48.439 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:48.439 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:48.439 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.439 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:48.439 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.439 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.440 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.441 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.442 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:48.443 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:48.443 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:48.443 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:48.443 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:48.443 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:48.443 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:48.443 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:48.443 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:48.443 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:48.443 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:48.443 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:48.443 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:48.443 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:48.443 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:48.443 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:48.443 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:48.443 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:48.443 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:48.443 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:48.443 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:48.443 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:48.443 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:48.443 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:48.444 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:48.444 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:48.444 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:48.444 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:48.444 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:48.444 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:48.444 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:48.444 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:48.444 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:48.444 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:48.444 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:48.444 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:48.444 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:48.444 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:48.444 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:48.444 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:48.444 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:48.444 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:48.444 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:48.444 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:48.444 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:48.444 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:48.444 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:48.444 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:48.444 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:48.444 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:48.444 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:48.444 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:48.444 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:48.444 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:48.444 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:48.444 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:48.444 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:48.444 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:48.444 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:48.444 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:48.444 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:48.444 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:48.444 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:48.444 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:48.444 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:48.444 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:48.444 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:48.444 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:48.444 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:48.444 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:48.444 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:48.444 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:48.444 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:48.444 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:48.444 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:48.444 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:48.444 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:48.444 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:48.444 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:48.444 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:48.444 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:48.444 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:48.444 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:48.444 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:48.444 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:48.444 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:48.444 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:48.444 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:48.444 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:48.444 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:48.444 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:48.444 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:48.444 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:48.444 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:48.444 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:48.444 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:48.444 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:48.444 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:48.444 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:48.445 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:48.445 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:48.445 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:48.445 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:48.445 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:48.445 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:48.445 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:48.445 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:48.445 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:48.445 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:48.445 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:48.445 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:48.445 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:48.445 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:48.445 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:48.445 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:48.445 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:48.445 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:48.445 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:48.445 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:48.445 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:48.445 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:48.445 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:48.704 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:48.704 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:48.704 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:48.704 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:48.704 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:48.704 18:36:00 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:02:48.704 18:36:00 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:48.704 18:36:00 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:02:48.704 18:36:00 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.704 00:02:48.704 real 1m23.073s 00:02:48.704 user 14m24.385s 00:02:48.704 sys 1m47.237s 00:02:48.704 18:36:00 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:48.704 18:36:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:48.704 ************************************ 00:02:48.704 END TEST build_native_dpdk 00:02:48.704 ************************************ 00:02:48.704 18:36:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:48.704 18:36:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:48.704 18:36:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:48.704 18:36:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:48.704 18:36:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:48.704 18:36:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:48.704 18:36:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:48.704 18:36:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:48.704 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
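The configure step above resolves the DPDK that was just installed into dpdk/build through the pkg-config metadata written earlier in this log (libdpdk.pc and libdpdk-libs.pc under dpdk/build/lib/pkgconfig). As a rough manual check of that same lookup on the build host — not part of the autotest itself, just a hand-run sketch assuming the identical workspace layout as this run:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # version of the DPDK build installed above
    pkg-config --cflags libdpdk       # -I flags pointing at .../dpdk/build/include
    pkg-config --libs libdpdk         # -L/-l flags for the librte_* shared libraries listed above

The "DPDK libraries" and "DPDK includes" lines that follow are configure reporting the directories it resolved from that metadata and from the --with-dpdk=.../dpdk/build argument.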
00:02:48.704 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.704 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.962 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:49.221 Using 'verbs' RDMA provider 00:02:59.760 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:07.871 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:08.129 Creating mk/config.mk...done. 00:03:08.129 Creating mk/cc.flags.mk...done. 00:03:08.129 Type 'make' to build. 00:03:08.129 18:36:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:08.129 18:36:19 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:08.129 18:36:19 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:08.129 18:36:19 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.129 ************************************ 00:03:08.129 START TEST make 00:03:08.129 ************************************ 00:03:08.129 18:36:19 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:08.386 make[1]: Nothing to be done for 'all'. 00:03:10.309 The Meson build system 00:03:10.309 Version: 1.3.1 00:03:10.309 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:10.309 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:10.309 Build type: native build 00:03:10.309 Project name: libvfio-user 00:03:10.309 Project version: 0.0.1 00:03:10.309 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:10.309 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:10.309 Host machine cpu family: x86_64 00:03:10.309 Host machine cpu: x86_64 00:03:10.309 Run-time dependency threads found: YES 00:03:10.309 Library dl found: YES 00:03:10.309 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:10.309 Run-time dependency json-c found: YES 0.17 00:03:10.309 Run-time dependency cmocka found: YES 1.1.7 00:03:10.309 Program pytest-3 found: NO 00:03:10.309 Program flake8 found: NO 00:03:10.309 Program misspell-fixer found: NO 00:03:10.309 Program restructuredtext-lint found: NO 00:03:10.309 Program valgrind found: YES (/usr/bin/valgrind) 00:03:10.309 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:10.309 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:10.309 Compiler for C supports arguments -Wwrite-strings: YES 00:03:10.309 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:10.309 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:10.309 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:10.309 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:10.309 Build targets in project: 8 00:03:10.309 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:10.309 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:10.309 00:03:10.309 libvfio-user 0.0.1 00:03:10.309 00:03:10.309 User defined options 00:03:10.309 buildtype : debug 00:03:10.309 default_library: shared 00:03:10.309 libdir : /usr/local/lib 00:03:10.309 00:03:10.309 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:10.884 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:10.884 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:10.884 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:10.884 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:10.884 [4/37] Compiling C object samples/null.p/null.c.o 00:03:10.884 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:10.884 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:10.884 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:10.884 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:11.147 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:11.147 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:11.147 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:11.147 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:11.147 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:11.147 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:11.147 [15/37] Compiling C object samples/server.p/server.c.o 00:03:11.147 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:11.147 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:11.147 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:11.147 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:11.147 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:11.147 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:11.147 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:11.147 [23/37] Compiling C object samples/client.p/client.c.o 00:03:11.147 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:11.147 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:11.147 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:11.147 [27/37] Linking target samples/client 00:03:11.147 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:11.409 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:11.409 [30/37] Linking target test/unit_tests 00:03:11.409 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:11.685 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:11.685 [33/37] Linking target samples/server 00:03:11.685 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:11.685 [35/37] Linking target samples/null 00:03:11.685 [36/37] Linking target samples/gpio-pci-idio-16 00:03:11.685 [37/37] Linking target samples/lspci 00:03:11.685 INFO: autodetecting backend as ninja 00:03:11.685 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
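The Meson/ninja run above builds the bundled libvfio-user with the options reported in its summary (buildtype debug, default_library shared, libdir /usr/local/lib); the DESTDIR'd meson install on the next line then stages the result under spdk/build/libvfio-user. The exact meson command line is not captured in this log, so the following standalone reconstruction from the reported source dir, build dir, and options is only an approximation of what the SPDK build scripts drive:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    meson setup build/libvfio-user/build-debug libvfio-user \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C build/libvfio-user/build-debug
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C build/libvfio-user/build-debug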
00:03:11.685 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:12.634 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:12.634 ninja: no work to do. 00:03:24.828 CC lib/ut/ut.o 00:03:24.828 CC lib/ut_mock/mock.o 00:03:24.828 CC lib/log/log.o 00:03:24.828 CC lib/log/log_flags.o 00:03:24.828 CC lib/log/log_deprecated.o 00:03:24.828 LIB libspdk_ut.a 00:03:24.828 LIB libspdk_log.a 00:03:24.828 LIB libspdk_ut_mock.a 00:03:24.828 SO libspdk_ut.so.2.0 00:03:24.828 SO libspdk_ut_mock.so.6.0 00:03:24.828 SO libspdk_log.so.7.0 00:03:24.828 SYMLINK libspdk_ut.so 00:03:24.828 SYMLINK libspdk_ut_mock.so 00:03:24.828 SYMLINK libspdk_log.so 00:03:24.828 CXX lib/trace_parser/trace.o 00:03:24.828 CC lib/dma/dma.o 00:03:24.828 CC lib/ioat/ioat.o 00:03:24.828 CC lib/util/base64.o 00:03:24.828 CC lib/util/bit_array.o 00:03:24.828 CC lib/util/cpuset.o 00:03:24.828 CC lib/util/crc16.o 00:03:24.828 CC lib/util/crc32.o 00:03:24.828 CC lib/util/crc32c.o 00:03:24.828 CC lib/util/crc32_ieee.o 00:03:24.828 CC lib/util/crc64.o 00:03:24.828 CC lib/util/dif.o 00:03:24.828 CC lib/util/fd.o 00:03:24.828 CC lib/util/file.o 00:03:24.828 CC lib/util/hexlify.o 00:03:24.828 CC lib/util/iov.o 00:03:24.828 CC lib/util/math.o 00:03:24.828 CC lib/util/pipe.o 00:03:24.828 CC lib/util/strerror_tls.o 00:03:24.828 CC lib/util/string.o 00:03:24.828 CC lib/util/uuid.o 00:03:24.828 CC lib/util/fd_group.o 00:03:24.828 CC lib/util/xor.o 00:03:24.828 CC lib/util/zipf.o 00:03:24.828 CC lib/vfio_user/host/vfio_user_pci.o 00:03:24.828 CC lib/vfio_user/host/vfio_user.o 00:03:24.828 LIB libspdk_dma.a 00:03:24.828 SO libspdk_dma.so.4.0 00:03:24.828 SYMLINK libspdk_dma.so 00:03:24.828 LIB libspdk_ioat.a 00:03:24.828 SO libspdk_ioat.so.7.0 00:03:24.828 SYMLINK libspdk_ioat.so 00:03:24.828 LIB libspdk_vfio_user.a 00:03:24.828 SO libspdk_vfio_user.so.5.0 00:03:24.828 SYMLINK libspdk_vfio_user.so 00:03:24.828 LIB libspdk_util.a 00:03:24.828 SO libspdk_util.so.9.0 00:03:24.828 SYMLINK libspdk_util.so 00:03:24.828 LIB libspdk_trace_parser.a 00:03:24.828 CC lib/json/json_parse.o 00:03:24.828 CC lib/rdma/common.o 00:03:24.828 CC lib/conf/conf.o 00:03:24.828 CC lib/env_dpdk/env.o 00:03:24.828 CC lib/json/json_util.o 00:03:24.828 CC lib/idxd/idxd.o 00:03:24.828 CC lib/rdma/rdma_verbs.o 00:03:24.828 CC lib/vmd/vmd.o 00:03:24.828 CC lib/env_dpdk/memory.o 00:03:24.828 CC lib/json/json_write.o 00:03:24.828 CC lib/vmd/led.o 00:03:24.828 CC lib/idxd/idxd_user.o 00:03:24.828 CC lib/env_dpdk/pci.o 00:03:24.828 CC lib/idxd/idxd_kernel.o 00:03:24.828 CC lib/env_dpdk/init.o 00:03:24.828 CC lib/env_dpdk/threads.o 00:03:24.828 CC lib/env_dpdk/pci_ioat.o 00:03:24.828 CC lib/env_dpdk/pci_virtio.o 00:03:24.828 CC lib/env_dpdk/pci_vmd.o 00:03:24.828 CC lib/env_dpdk/pci_idxd.o 00:03:24.828 CC lib/env_dpdk/sigbus_handler.o 00:03:24.828 CC lib/env_dpdk/pci_event.o 00:03:24.828 CC lib/env_dpdk/pci_dpdk.o 00:03:24.828 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:24.828 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:24.828 SO libspdk_trace_parser.so.5.0 00:03:24.828 SYMLINK libspdk_trace_parser.so 00:03:25.086 LIB libspdk_conf.a 00:03:25.086 SO libspdk_conf.so.6.0 00:03:25.086 LIB libspdk_json.a 00:03:25.086 LIB libspdk_rdma.a 00:03:25.086 SYMLINK libspdk_conf.so 00:03:25.086 SO libspdk_json.so.6.0 00:03:25.086 SO libspdk_rdma.so.6.0 00:03:25.342 SYMLINK libspdk_json.so 00:03:25.342 SYMLINK 
libspdk_rdma.so 00:03:25.342 CC lib/jsonrpc/jsonrpc_server.o 00:03:25.342 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:25.343 CC lib/jsonrpc/jsonrpc_client.o 00:03:25.343 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.343 LIB libspdk_idxd.a 00:03:25.343 SO libspdk_idxd.so.12.0 00:03:25.343 LIB libspdk_vmd.a 00:03:25.600 SYMLINK libspdk_idxd.so 00:03:25.600 SO libspdk_vmd.so.6.0 00:03:25.600 SYMLINK libspdk_vmd.so 00:03:25.600 LIB libspdk_jsonrpc.a 00:03:25.600 SO libspdk_jsonrpc.so.6.0 00:03:25.600 SYMLINK libspdk_jsonrpc.so 00:03:25.857 CC lib/rpc/rpc.o 00:03:26.115 LIB libspdk_rpc.a 00:03:26.115 SO libspdk_rpc.so.6.0 00:03:26.115 SYMLINK libspdk_rpc.so 00:03:26.373 CC lib/trace/trace.o 00:03:26.373 CC lib/trace/trace_flags.o 00:03:26.373 CC lib/keyring/keyring.o 00:03:26.373 CC lib/trace/trace_rpc.o 00:03:26.373 CC lib/notify/notify.o 00:03:26.373 CC lib/keyring/keyring_rpc.o 00:03:26.373 CC lib/notify/notify_rpc.o 00:03:26.631 LIB libspdk_notify.a 00:03:26.631 SO libspdk_notify.so.6.0 00:03:26.631 LIB libspdk_keyring.a 00:03:26.631 SYMLINK libspdk_notify.so 00:03:26.631 LIB libspdk_trace.a 00:03:26.631 SO libspdk_keyring.so.1.0 00:03:26.631 SO libspdk_trace.so.10.0 00:03:26.631 SYMLINK libspdk_keyring.so 00:03:26.631 SYMLINK libspdk_trace.so 00:03:26.889 LIB libspdk_env_dpdk.a 00:03:26.889 CC lib/thread/thread.o 00:03:26.889 CC lib/thread/iobuf.o 00:03:26.889 SO libspdk_env_dpdk.so.14.0 00:03:26.889 CC lib/sock/sock.o 00:03:26.889 CC lib/sock/sock_rpc.o 00:03:27.148 SYMLINK libspdk_env_dpdk.so 00:03:27.148 LIB libspdk_sock.a 00:03:27.407 SO libspdk_sock.so.9.0 00:03:27.407 SYMLINK libspdk_sock.so 00:03:27.407 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.407 CC lib/nvme/nvme_ctrlr.o 00:03:27.407 CC lib/nvme/nvme_fabric.o 00:03:27.407 CC lib/nvme/nvme_ns_cmd.o 00:03:27.407 CC lib/nvme/nvme_ns.o 00:03:27.407 CC lib/nvme/nvme_pcie_common.o 00:03:27.407 CC lib/nvme/nvme_pcie.o 00:03:27.407 CC lib/nvme/nvme_qpair.o 00:03:27.407 CC lib/nvme/nvme.o 00:03:27.407 CC lib/nvme/nvme_quirks.o 00:03:27.407 CC lib/nvme/nvme_transport.o 00:03:27.407 CC lib/nvme/nvme_discovery.o 00:03:27.407 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:27.407 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:27.407 CC lib/nvme/nvme_tcp.o 00:03:27.407 CC lib/nvme/nvme_opal.o 00:03:27.407 CC lib/nvme/nvme_io_msg.o 00:03:27.407 CC lib/nvme/nvme_poll_group.o 00:03:27.407 CC lib/nvme/nvme_zns.o 00:03:27.407 CC lib/nvme/nvme_stubs.o 00:03:27.407 CC lib/nvme/nvme_auth.o 00:03:27.407 CC lib/nvme/nvme_cuse.o 00:03:27.407 CC lib/nvme/nvme_vfio_user.o 00:03:27.407 CC lib/nvme/nvme_rdma.o 00:03:28.341 LIB libspdk_thread.a 00:03:28.341 SO libspdk_thread.so.10.0 00:03:28.599 SYMLINK libspdk_thread.so 00:03:28.599 CC lib/vfu_tgt/tgt_endpoint.o 00:03:28.599 CC lib/init/json_config.o 00:03:28.599 CC lib/vfu_tgt/tgt_rpc.o 00:03:28.599 CC lib/blob/blobstore.o 00:03:28.599 CC lib/init/subsystem.o 00:03:28.599 CC lib/blob/request.o 00:03:28.599 CC lib/init/subsystem_rpc.o 00:03:28.599 CC lib/blob/zeroes.o 00:03:28.599 CC lib/blob/blob_bs_dev.o 00:03:28.599 CC lib/init/rpc.o 00:03:28.599 CC lib/accel/accel.o 00:03:28.599 CC lib/accel/accel_rpc.o 00:03:28.599 CC lib/virtio/virtio.o 00:03:28.599 CC lib/accel/accel_sw.o 00:03:28.599 CC lib/virtio/virtio_vhost_user.o 00:03:28.599 CC lib/virtio/virtio_vfio_user.o 00:03:28.599 CC lib/virtio/virtio_pci.o 00:03:28.857 LIB libspdk_init.a 00:03:28.857 SO libspdk_init.so.5.0 00:03:29.115 LIB libspdk_virtio.a 00:03:29.115 LIB libspdk_vfu_tgt.a 00:03:29.115 SYMLINK libspdk_init.so 00:03:29.115 SO libspdk_vfu_tgt.so.3.0 00:03:29.115 
SO libspdk_virtio.so.7.0 00:03:29.115 SYMLINK libspdk_vfu_tgt.so 00:03:29.115 SYMLINK libspdk_virtio.so 00:03:29.115 CC lib/event/app.o 00:03:29.115 CC lib/event/reactor.o 00:03:29.115 CC lib/event/log_rpc.o 00:03:29.115 CC lib/event/app_rpc.o 00:03:29.115 CC lib/event/scheduler_static.o 00:03:29.681 LIB libspdk_event.a 00:03:29.681 SO libspdk_event.so.13.0 00:03:29.681 SYMLINK libspdk_event.so 00:03:29.681 LIB libspdk_accel.a 00:03:29.681 SO libspdk_accel.so.15.0 00:03:29.939 SYMLINK libspdk_accel.so 00:03:29.939 LIB libspdk_nvme.a 00:03:29.939 SO libspdk_nvme.so.13.0 00:03:29.939 CC lib/bdev/bdev.o 00:03:29.939 CC lib/bdev/bdev_rpc.o 00:03:29.939 CC lib/bdev/bdev_zone.o 00:03:29.939 CC lib/bdev/part.o 00:03:29.939 CC lib/bdev/scsi_nvme.o 00:03:30.214 SYMLINK libspdk_nvme.so 00:03:31.605 LIB libspdk_blob.a 00:03:31.605 SO libspdk_blob.so.11.0 00:03:31.866 SYMLINK libspdk_blob.so 00:03:31.866 CC lib/lvol/lvol.o 00:03:31.866 CC lib/blobfs/blobfs.o 00:03:31.866 CC lib/blobfs/tree.o 00:03:32.431 LIB libspdk_bdev.a 00:03:32.689 SO libspdk_bdev.so.15.0 00:03:32.689 SYMLINK libspdk_bdev.so 00:03:32.689 LIB libspdk_blobfs.a 00:03:32.689 SO libspdk_blobfs.so.10.0 00:03:32.956 CC lib/nvmf/ctrlr.o 00:03:32.956 CC lib/nbd/nbd.o 00:03:32.956 CC lib/scsi/dev.o 00:03:32.956 CC lib/ublk/ublk.o 00:03:32.956 CC lib/nvmf/ctrlr_discovery.o 00:03:32.956 CC lib/nbd/nbd_rpc.o 00:03:32.956 CC lib/scsi/lun.o 00:03:32.956 CC lib/ublk/ublk_rpc.o 00:03:32.956 CC lib/nvmf/ctrlr_bdev.o 00:03:32.956 CC lib/ftl/ftl_core.o 00:03:32.956 CC lib/scsi/port.o 00:03:32.956 CC lib/nvmf/subsystem.o 00:03:32.956 CC lib/ftl/ftl_init.o 00:03:32.957 CC lib/nvmf/nvmf.o 00:03:32.957 CC lib/scsi/scsi.o 00:03:32.957 CC lib/ftl/ftl_layout.o 00:03:32.957 CC lib/nvmf/nvmf_rpc.o 00:03:32.957 CC lib/scsi/scsi_bdev.o 00:03:32.957 CC lib/ftl/ftl_debug.o 00:03:32.957 CC lib/scsi/scsi_pr.o 00:03:32.957 CC lib/nvmf/transport.o 00:03:32.957 CC lib/ftl/ftl_io.o 00:03:32.957 CC lib/ftl/ftl_sb.o 00:03:32.957 CC lib/scsi/scsi_rpc.o 00:03:32.957 CC lib/nvmf/tcp.o 00:03:32.957 CC lib/nvmf/stubs.o 00:03:32.957 CC lib/scsi/task.o 00:03:32.957 CC lib/ftl/ftl_l2p.o 00:03:32.957 CC lib/nvmf/mdns_server.o 00:03:32.957 CC lib/ftl/ftl_l2p_flat.o 00:03:32.957 CC lib/nvmf/vfio_user.o 00:03:32.957 CC lib/ftl/ftl_nv_cache.o 00:03:32.957 CC lib/ftl/ftl_band.o 00:03:32.957 CC lib/nvmf/rdma.o 00:03:32.957 CC lib/ftl/ftl_band_ops.o 00:03:32.957 CC lib/nvmf/auth.o 00:03:32.957 CC lib/ftl/ftl_writer.o 00:03:32.957 CC lib/ftl/ftl_rq.o 00:03:32.957 CC lib/ftl/ftl_reloc.o 00:03:32.957 CC lib/ftl/ftl_l2p_cache.o 00:03:32.957 CC lib/ftl/ftl_p2l.o 00:03:32.957 CC lib/ftl/mngt/ftl_mngt.o 00:03:32.957 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:32.957 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:32.957 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:32.957 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:32.957 SYMLINK libspdk_blobfs.so 00:03:32.957 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:32.957 LIB libspdk_lvol.a 00:03:32.957 SO libspdk_lvol.so.10.0 00:03:32.957 SYMLINK libspdk_lvol.so 00:03:32.957 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.222 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:33.222 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:33.222 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:33.222 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:33.222 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:33.222 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:33.222 CC lib/ftl/utils/ftl_conf.o 00:03:33.222 CC lib/ftl/utils/ftl_md.o 00:03:33.222 CC lib/ftl/utils/ftl_mempool.o 00:03:33.222 CC lib/ftl/utils/ftl_bitmap.o 00:03:33.222 CC 
lib/ftl/utils/ftl_property.o 00:03:33.222 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:33.222 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:33.222 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:33.222 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:33.222 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:33.481 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:33.481 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:33.481 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:33.481 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:33.481 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:33.481 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:33.481 CC lib/ftl/base/ftl_base_dev.o 00:03:33.481 CC lib/ftl/base/ftl_base_bdev.o 00:03:33.481 CC lib/ftl/ftl_trace.o 00:03:33.740 LIB libspdk_nbd.a 00:03:33.740 SO libspdk_nbd.so.7.0 00:03:33.740 SYMLINK libspdk_nbd.so 00:03:33.740 LIB libspdk_scsi.a 00:03:33.740 SO libspdk_scsi.so.9.0 00:03:33.998 SYMLINK libspdk_scsi.so 00:03:33.998 LIB libspdk_ublk.a 00:03:33.998 SO libspdk_ublk.so.3.0 00:03:33.998 SYMLINK libspdk_ublk.so 00:03:33.998 CC lib/vhost/vhost.o 00:03:33.998 CC lib/iscsi/conn.o 00:03:33.998 CC lib/vhost/vhost_rpc.o 00:03:33.998 CC lib/iscsi/init_grp.o 00:03:33.998 CC lib/vhost/vhost_scsi.o 00:03:33.998 CC lib/iscsi/iscsi.o 00:03:33.998 CC lib/vhost/vhost_blk.o 00:03:33.998 CC lib/iscsi/md5.o 00:03:33.998 CC lib/vhost/rte_vhost_user.o 00:03:33.998 CC lib/iscsi/param.o 00:03:33.998 CC lib/iscsi/portal_grp.o 00:03:33.998 CC lib/iscsi/tgt_node.o 00:03:33.998 CC lib/iscsi/iscsi_subsystem.o 00:03:33.998 CC lib/iscsi/iscsi_rpc.o 00:03:33.998 CC lib/iscsi/task.o 00:03:34.256 LIB libspdk_ftl.a 00:03:34.515 SO libspdk_ftl.so.9.0 00:03:34.773 SYMLINK libspdk_ftl.so 00:03:35.338 LIB libspdk_vhost.a 00:03:35.338 SO libspdk_vhost.so.8.0 00:03:35.338 LIB libspdk_nvmf.a 00:03:35.338 SYMLINK libspdk_vhost.so 00:03:35.596 SO libspdk_nvmf.so.18.0 00:03:35.596 LIB libspdk_iscsi.a 00:03:35.596 SO libspdk_iscsi.so.8.0 00:03:35.596 SYMLINK libspdk_nvmf.so 00:03:35.596 SYMLINK libspdk_iscsi.so 00:03:35.854 CC module/env_dpdk/env_dpdk_rpc.o 00:03:35.854 CC module/vfu_device/vfu_virtio.o 00:03:35.854 CC module/vfu_device/vfu_virtio_blk.o 00:03:35.854 CC module/vfu_device/vfu_virtio_scsi.o 00:03:35.854 CC module/vfu_device/vfu_virtio_rpc.o 00:03:36.112 CC module/accel/ioat/accel_ioat.o 00:03:36.112 CC module/blob/bdev/blob_bdev.o 00:03:36.112 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:36.112 CC module/accel/ioat/accel_ioat_rpc.o 00:03:36.112 CC module/accel/iaa/accel_iaa.o 00:03:36.112 CC module/accel/dsa/accel_dsa.o 00:03:36.112 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.112 CC module/sock/posix/posix.o 00:03:36.112 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.112 CC module/keyring/linux/keyring.o 00:03:36.112 CC module/accel/error/accel_error.o 00:03:36.112 CC module/keyring/linux/keyring_rpc.o 00:03:36.112 CC module/scheduler/gscheduler/gscheduler.o 00:03:36.112 CC module/accel/error/accel_error_rpc.o 00:03:36.112 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:36.112 CC module/keyring/file/keyring.o 00:03:36.112 CC module/keyring/file/keyring_rpc.o 00:03:36.112 LIB libspdk_env_dpdk_rpc.a 00:03:36.112 SO libspdk_env_dpdk_rpc.so.6.0 00:03:36.112 SYMLINK libspdk_env_dpdk_rpc.so 00:03:36.112 LIB libspdk_keyring_linux.a 00:03:36.112 LIB libspdk_scheduler_gscheduler.a 00:03:36.112 LIB libspdk_keyring_file.a 00:03:36.112 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.370 SO libspdk_scheduler_gscheduler.so.4.0 00:03:36.370 SO libspdk_keyring_linux.so.1.0 00:03:36.370 SO libspdk_keyring_file.so.1.0 00:03:36.370 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.370 LIB libspdk_accel_error.a 00:03:36.370 LIB libspdk_accel_ioat.a 00:03:36.370 LIB libspdk_scheduler_dynamic.a 00:03:36.370 LIB libspdk_accel_iaa.a 00:03:36.370 SO libspdk_accel_error.so.2.0 00:03:36.370 SO libspdk_accel_ioat.so.6.0 00:03:36.370 SYMLINK libspdk_scheduler_gscheduler.so 00:03:36.370 SYMLINK libspdk_keyring_file.so 00:03:36.370 SYMLINK libspdk_keyring_linux.so 00:03:36.370 SO libspdk_scheduler_dynamic.so.4.0 00:03:36.370 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.370 SO libspdk_accel_iaa.so.3.0 00:03:36.370 SYMLINK libspdk_accel_error.so 00:03:36.370 SYMLINK libspdk_accel_ioat.so 00:03:36.370 LIB libspdk_accel_dsa.a 00:03:36.370 LIB libspdk_blob_bdev.a 00:03:36.370 SYMLINK libspdk_scheduler_dynamic.so 00:03:36.370 SYMLINK libspdk_accel_iaa.so 00:03:36.370 SO libspdk_accel_dsa.so.5.0 00:03:36.370 SO libspdk_blob_bdev.so.11.0 00:03:36.370 SYMLINK libspdk_accel_dsa.so 00:03:36.370 SYMLINK libspdk_blob_bdev.so 00:03:36.630 LIB libspdk_vfu_device.a 00:03:36.630 SO libspdk_vfu_device.so.3.0 00:03:36.630 CC module/bdev/raid/bdev_raid.o 00:03:36.630 CC module/blobfs/bdev/blobfs_bdev.o 00:03:36.630 CC module/bdev/raid/bdev_raid_rpc.o 00:03:36.630 CC module/bdev/lvol/vbdev_lvol.o 00:03:36.630 CC module/bdev/null/bdev_null.o 00:03:36.630 CC module/bdev/error/vbdev_error.o 00:03:36.630 CC module/bdev/gpt/gpt.o 00:03:36.630 CC module/bdev/raid/bdev_raid_sb.o 00:03:36.630 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:36.630 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:36.630 CC module/bdev/nvme/bdev_nvme.o 00:03:36.630 CC module/bdev/gpt/vbdev_gpt.o 00:03:36.630 CC module/bdev/error/vbdev_error_rpc.o 00:03:36.630 CC module/bdev/raid/raid0.o 00:03:36.630 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:36.630 CC module/bdev/null/bdev_null_rpc.o 00:03:36.630 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:36.630 CC module/bdev/raid/raid1.o 00:03:36.630 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:36.630 CC module/bdev/raid/concat.o 00:03:36.630 CC module/bdev/aio/bdev_aio.o 00:03:36.630 CC module/bdev/delay/vbdev_delay.o 00:03:36.630 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:36.630 CC module/bdev/ftl/bdev_ftl.o 00:03:36.630 CC module/bdev/nvme/nvme_rpc.o 00:03:36.630 CC module/bdev/aio/bdev_aio_rpc.o 00:03:36.630 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:36.630 CC module/bdev/malloc/bdev_malloc.o 00:03:36.630 CC module/bdev/passthru/vbdev_passthru.o 00:03:36.630 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:36.630 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:36.630 CC module/bdev/nvme/bdev_mdns_client.o 00:03:36.630 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:36.630 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:36.630 CC module/bdev/nvme/vbdev_opal.o 00:03:36.630 CC module/bdev/iscsi/bdev_iscsi.o 00:03:36.630 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:36.630 CC module/bdev/split/vbdev_split.o 00:03:36.630 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:36.630 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:36.630 CC module/bdev/split/vbdev_split_rpc.o 00:03:36.630 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:36.888 SYMLINK libspdk_vfu_device.so 00:03:36.888 LIB libspdk_sock_posix.a 00:03:36.888 SO libspdk_sock_posix.so.6.0 00:03:37.147 LIB libspdk_blobfs_bdev.a 00:03:37.147 SO libspdk_blobfs_bdev.so.6.0 00:03:37.147 SYMLINK libspdk_sock_posix.so 00:03:37.147 LIB libspdk_bdev_split.a 00:03:37.147 SO libspdk_bdev_split.so.6.0 00:03:37.147 SYMLINK libspdk_blobfs_bdev.so 00:03:37.147 LIB libspdk_bdev_passthru.a 00:03:37.147 LIB 
libspdk_bdev_null.a 00:03:37.147 SYMLINK libspdk_bdev_split.so 00:03:37.147 LIB libspdk_bdev_gpt.a 00:03:37.147 LIB libspdk_bdev_error.a 00:03:37.147 LIB libspdk_bdev_aio.a 00:03:37.147 SO libspdk_bdev_passthru.so.6.0 00:03:37.147 SO libspdk_bdev_null.so.6.0 00:03:37.147 SO libspdk_bdev_gpt.so.6.0 00:03:37.147 SO libspdk_bdev_error.so.6.0 00:03:37.147 SO libspdk_bdev_aio.so.6.0 00:03:37.147 LIB libspdk_bdev_iscsi.a 00:03:37.147 LIB libspdk_bdev_ftl.a 00:03:37.147 LIB libspdk_bdev_zone_block.a 00:03:37.147 LIB libspdk_bdev_delay.a 00:03:37.147 SO libspdk_bdev_iscsi.so.6.0 00:03:37.147 SYMLINK libspdk_bdev_passthru.so 00:03:37.147 SO libspdk_bdev_ftl.so.6.0 00:03:37.147 SO libspdk_bdev_zone_block.so.6.0 00:03:37.147 SYMLINK libspdk_bdev_null.so 00:03:37.147 LIB libspdk_bdev_malloc.a 00:03:37.405 SYMLINK libspdk_bdev_error.so 00:03:37.405 SYMLINK libspdk_bdev_gpt.so 00:03:37.405 SYMLINK libspdk_bdev_aio.so 00:03:37.405 LIB libspdk_bdev_virtio.a 00:03:37.405 SO libspdk_bdev_delay.so.6.0 00:03:37.405 SO libspdk_bdev_malloc.so.6.0 00:03:37.405 SYMLINK libspdk_bdev_ftl.so 00:03:37.405 SYMLINK libspdk_bdev_iscsi.so 00:03:37.405 SO libspdk_bdev_virtio.so.6.0 00:03:37.405 SYMLINK libspdk_bdev_zone_block.so 00:03:37.405 SYMLINK libspdk_bdev_delay.so 00:03:37.405 SYMLINK libspdk_bdev_malloc.so 00:03:37.405 SYMLINK libspdk_bdev_virtio.so 00:03:37.405 LIB libspdk_bdev_lvol.a 00:03:37.405 SO libspdk_bdev_lvol.so.6.0 00:03:37.405 SYMLINK libspdk_bdev_lvol.so 00:03:37.971 LIB libspdk_bdev_raid.a 00:03:37.971 SO libspdk_bdev_raid.so.6.0 00:03:37.971 SYMLINK libspdk_bdev_raid.so 00:03:39.343 LIB libspdk_bdev_nvme.a 00:03:39.343 SO libspdk_bdev_nvme.so.7.0 00:03:39.343 SYMLINK libspdk_bdev_nvme.so 00:03:39.602 CC module/event/subsystems/vmd/vmd.o 00:03:39.602 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:39.602 CC module/event/subsystems/scheduler/scheduler.o 00:03:39.602 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:39.602 CC module/event/subsystems/keyring/keyring.o 00:03:39.602 CC module/event/subsystems/iobuf/iobuf.o 00:03:39.602 CC module/event/subsystems/sock/sock.o 00:03:39.602 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:39.602 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:39.602 LIB libspdk_event_keyring.a 00:03:39.602 LIB libspdk_event_vhost_blk.a 00:03:39.602 LIB libspdk_event_sock.a 00:03:39.602 LIB libspdk_event_scheduler.a 00:03:39.602 LIB libspdk_event_vmd.a 00:03:39.602 LIB libspdk_event_vfu_tgt.a 00:03:39.861 LIB libspdk_event_iobuf.a 00:03:39.861 SO libspdk_event_keyring.so.1.0 00:03:39.861 SO libspdk_event_vhost_blk.so.3.0 00:03:39.861 SO libspdk_event_scheduler.so.4.0 00:03:39.861 SO libspdk_event_sock.so.5.0 00:03:39.861 SO libspdk_event_vfu_tgt.so.3.0 00:03:39.861 SO libspdk_event_vmd.so.6.0 00:03:39.861 SO libspdk_event_iobuf.so.3.0 00:03:39.861 SYMLINK libspdk_event_keyring.so 00:03:39.861 SYMLINK libspdk_event_vhost_blk.so 00:03:39.861 SYMLINK libspdk_event_scheduler.so 00:03:39.861 SYMLINK libspdk_event_sock.so 00:03:39.861 SYMLINK libspdk_event_vfu_tgt.so 00:03:39.861 SYMLINK libspdk_event_vmd.so 00:03:39.861 SYMLINK libspdk_event_iobuf.so 00:03:40.119 CC module/event/subsystems/accel/accel.o 00:03:40.119 LIB libspdk_event_accel.a 00:03:40.119 SO libspdk_event_accel.so.6.0 00:03:40.119 SYMLINK libspdk_event_accel.so 00:03:40.377 CC module/event/subsystems/bdev/bdev.o 00:03:40.635 LIB libspdk_event_bdev.a 00:03:40.635 SO libspdk_event_bdev.so.6.0 00:03:40.635 SYMLINK libspdk_event_bdev.so 00:03:40.893 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:40.893 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:03:40.893 CC module/event/subsystems/scsi/scsi.o 00:03:40.893 CC module/event/subsystems/nbd/nbd.o 00:03:40.893 CC module/event/subsystems/ublk/ublk.o 00:03:40.893 LIB libspdk_event_ublk.a 00:03:40.893 LIB libspdk_event_nbd.a 00:03:40.893 LIB libspdk_event_scsi.a 00:03:40.893 SO libspdk_event_nbd.so.6.0 00:03:40.893 SO libspdk_event_ublk.so.3.0 00:03:40.893 SO libspdk_event_scsi.so.6.0 00:03:41.151 SYMLINK libspdk_event_ublk.so 00:03:41.151 SYMLINK libspdk_event_nbd.so 00:03:41.151 SYMLINK libspdk_event_scsi.so 00:03:41.151 LIB libspdk_event_nvmf.a 00:03:41.151 SO libspdk_event_nvmf.so.6.0 00:03:41.151 SYMLINK libspdk_event_nvmf.so 00:03:41.151 CC module/event/subsystems/iscsi/iscsi.o 00:03:41.151 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:41.409 LIB libspdk_event_vhost_scsi.a 00:03:41.409 LIB libspdk_event_iscsi.a 00:03:41.409 SO libspdk_event_vhost_scsi.so.3.0 00:03:41.409 SO libspdk_event_iscsi.so.6.0 00:03:41.409 SYMLINK libspdk_event_vhost_scsi.so 00:03:41.409 SYMLINK libspdk_event_iscsi.so 00:03:41.675 SO libspdk.so.6.0 00:03:41.675 SYMLINK libspdk.so 00:03:41.675 CXX app/trace/trace.o 00:03:41.675 CC app/spdk_nvme_identify/identify.o 00:03:41.675 CC app/spdk_top/spdk_top.o 00:03:41.675 TEST_HEADER include/spdk/accel.h 00:03:41.675 CC app/spdk_lspci/spdk_lspci.o 00:03:41.675 CC app/trace_record/trace_record.o 00:03:41.675 TEST_HEADER include/spdk/accel_module.h 00:03:41.675 CC app/spdk_nvme_perf/perf.o 00:03:41.675 TEST_HEADER include/spdk/assert.h 00:03:41.675 CC test/rpc_client/rpc_client_test.o 00:03:41.675 TEST_HEADER include/spdk/barrier.h 00:03:41.675 CC app/spdk_nvme_discover/discovery_aer.o 00:03:41.675 TEST_HEADER include/spdk/base64.h 00:03:41.941 TEST_HEADER include/spdk/bdev.h 00:03:41.941 TEST_HEADER include/spdk/bdev_module.h 00:03:41.941 TEST_HEADER include/spdk/bdev_zone.h 00:03:41.941 TEST_HEADER include/spdk/bit_array.h 00:03:41.941 TEST_HEADER include/spdk/bit_pool.h 00:03:41.941 TEST_HEADER include/spdk/blob_bdev.h 00:03:41.941 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:41.941 TEST_HEADER include/spdk/blobfs.h 00:03:41.941 TEST_HEADER include/spdk/blob.h 00:03:41.941 TEST_HEADER include/spdk/conf.h 00:03:41.941 TEST_HEADER include/spdk/config.h 00:03:41.941 TEST_HEADER include/spdk/cpuset.h 00:03:41.941 TEST_HEADER include/spdk/crc16.h 00:03:41.941 CC app/spdk_dd/spdk_dd.o 00:03:41.941 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:41.941 TEST_HEADER include/spdk/crc32.h 00:03:41.941 TEST_HEADER include/spdk/crc64.h 00:03:41.941 TEST_HEADER include/spdk/dif.h 00:03:41.941 CC app/nvmf_tgt/nvmf_main.o 00:03:41.941 TEST_HEADER include/spdk/dma.h 00:03:41.941 TEST_HEADER include/spdk/endian.h 00:03:41.941 CC app/iscsi_tgt/iscsi_tgt.o 00:03:41.941 TEST_HEADER include/spdk/env_dpdk.h 00:03:41.941 TEST_HEADER include/spdk/env.h 00:03:41.941 TEST_HEADER include/spdk/event.h 00:03:41.941 CC app/vhost/vhost.o 00:03:41.941 TEST_HEADER include/spdk/fd_group.h 00:03:41.941 TEST_HEADER include/spdk/fd.h 00:03:41.941 TEST_HEADER include/spdk/file.h 00:03:41.941 TEST_HEADER include/spdk/ftl.h 00:03:41.941 TEST_HEADER include/spdk/gpt_spec.h 00:03:41.941 TEST_HEADER include/spdk/hexlify.h 00:03:41.941 TEST_HEADER include/spdk/histogram_data.h 00:03:41.941 TEST_HEADER include/spdk/idxd.h 00:03:41.941 CC examples/ioat/perf/perf.o 00:03:41.941 CC app/fio/nvme/fio_plugin.o 00:03:41.941 CC examples/ioat/verify/verify.o 00:03:41.941 TEST_HEADER include/spdk/idxd_spec.h 00:03:41.941 CC test/app/histogram_perf/histogram_perf.o 
00:03:41.941 CC examples/util/zipf/zipf.o 00:03:41.941 CC examples/sock/hello_world/hello_sock.o 00:03:41.941 CC examples/idxd/perf/perf.o 00:03:41.941 TEST_HEADER include/spdk/init.h 00:03:41.941 CC examples/nvme/reconnect/reconnect.o 00:03:41.941 CC test/app/stub/stub.o 00:03:41.941 CC examples/nvme/hello_world/hello_world.o 00:03:41.941 TEST_HEADER include/spdk/ioat.h 00:03:41.941 CC examples/vmd/lsvmd/lsvmd.o 00:03:41.941 CC examples/accel/perf/accel_perf.o 00:03:41.941 TEST_HEADER include/spdk/ioat_spec.h 00:03:41.941 CC app/spdk_tgt/spdk_tgt.o 00:03:41.941 CC test/event/event_perf/event_perf.o 00:03:41.941 TEST_HEADER include/spdk/iscsi_spec.h 00:03:41.941 CC test/thread/poller_perf/poller_perf.o 00:03:41.941 TEST_HEADER include/spdk/json.h 00:03:41.941 CC examples/vmd/led/led.o 00:03:41.941 CC test/env/vtophys/vtophys.o 00:03:41.941 TEST_HEADER include/spdk/jsonrpc.h 00:03:41.941 TEST_HEADER include/spdk/keyring.h 00:03:41.941 TEST_HEADER include/spdk/keyring_module.h 00:03:41.941 CC test/nvme/aer/aer.o 00:03:41.941 CC test/app/jsoncat/jsoncat.o 00:03:41.941 TEST_HEADER include/spdk/likely.h 00:03:41.941 TEST_HEADER include/spdk/log.h 00:03:41.941 TEST_HEADER include/spdk/lvol.h 00:03:41.941 TEST_HEADER include/spdk/memory.h 00:03:41.941 TEST_HEADER include/spdk/mmio.h 00:03:41.941 TEST_HEADER include/spdk/nbd.h 00:03:41.941 TEST_HEADER include/spdk/notify.h 00:03:41.941 TEST_HEADER include/spdk/nvme.h 00:03:41.941 TEST_HEADER include/spdk/nvme_intel.h 00:03:41.941 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:41.941 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:41.941 CC test/bdev/bdevio/bdevio.o 00:03:41.941 TEST_HEADER include/spdk/nvme_spec.h 00:03:41.941 CC examples/bdev/hello_world/hello_bdev.o 00:03:41.941 CC test/dma/test_dma/test_dma.o 00:03:41.941 TEST_HEADER include/spdk/nvme_zns.h 00:03:41.941 CC app/fio/bdev/fio_plugin.o 00:03:41.941 CC test/blobfs/mkfs/mkfs.o 00:03:41.941 CC examples/blob/cli/blobcli.o 00:03:41.941 CC test/accel/dif/dif.o 00:03:41.941 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:41.941 CC examples/nvmf/nvmf/nvmf.o 00:03:41.941 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.941 CC examples/blob/hello_world/hello_blob.o 00:03:41.941 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:41.941 CC test/app/bdev_svc/bdev_svc.o 00:03:41.941 TEST_HEADER include/spdk/nvmf.h 00:03:41.941 TEST_HEADER include/spdk/nvmf_spec.h 00:03:41.941 TEST_HEADER include/spdk/nvmf_transport.h 00:03:41.941 CC examples/thread/thread/thread_ex.o 00:03:41.941 TEST_HEADER include/spdk/opal.h 00:03:41.941 TEST_HEADER include/spdk/opal_spec.h 00:03:41.941 TEST_HEADER include/spdk/pci_ids.h 00:03:41.941 TEST_HEADER include/spdk/pipe.h 00:03:41.941 TEST_HEADER include/spdk/queue.h 00:03:41.941 TEST_HEADER include/spdk/reduce.h 00:03:42.205 TEST_HEADER include/spdk/rpc.h 00:03:42.205 TEST_HEADER include/spdk/scheduler.h 00:03:42.205 TEST_HEADER include/spdk/scsi.h 00:03:42.205 TEST_HEADER include/spdk/scsi_spec.h 00:03:42.205 TEST_HEADER include/spdk/sock.h 00:03:42.205 LINK spdk_lspci 00:03:42.205 TEST_HEADER include/spdk/stdinc.h 00:03:42.205 TEST_HEADER include/spdk/string.h 00:03:42.205 TEST_HEADER include/spdk/thread.h 00:03:42.205 TEST_HEADER include/spdk/trace.h 00:03:42.205 TEST_HEADER include/spdk/trace_parser.h 00:03:42.205 TEST_HEADER include/spdk/tree.h 00:03:42.205 CC test/env/mem_callbacks/mem_callbacks.o 00:03:42.205 TEST_HEADER include/spdk/ublk.h 00:03:42.205 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:42.205 TEST_HEADER include/spdk/util.h 00:03:42.205 TEST_HEADER 
include/spdk/uuid.h 00:03:42.205 TEST_HEADER include/spdk/version.h 00:03:42.205 CC test/lvol/esnap/esnap.o 00:03:42.205 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:42.205 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:42.205 TEST_HEADER include/spdk/vhost.h 00:03:42.205 TEST_HEADER include/spdk/vmd.h 00:03:42.205 TEST_HEADER include/spdk/xor.h 00:03:42.205 TEST_HEADER include/spdk/zipf.h 00:03:42.205 CXX test/cpp_headers/accel.o 00:03:42.205 LINK rpc_client_test 00:03:42.205 LINK spdk_nvme_discover 00:03:42.205 LINK lsvmd 00:03:42.205 LINK interrupt_tgt 00:03:42.205 LINK zipf 00:03:42.205 LINK histogram_perf 00:03:42.205 LINK poller_perf 00:03:42.205 LINK vtophys 00:03:42.205 LINK nvmf_tgt 00:03:42.205 LINK jsoncat 00:03:42.205 LINK led 00:03:42.205 LINK vhost 00:03:42.205 LINK event_perf 00:03:42.205 LINK stub 00:03:42.205 LINK spdk_trace_record 00:03:42.205 LINK iscsi_tgt 00:03:42.468 LINK ioat_perf 00:03:42.468 LINK spdk_tgt 00:03:42.468 LINK verify 00:03:42.468 LINK hello_world 00:03:42.468 LINK bdev_svc 00:03:42.468 LINK hello_sock 00:03:42.468 LINK mkfs 00:03:42.468 CXX test/cpp_headers/accel_module.o 00:03:42.468 LINK hello_bdev 00:03:42.468 LINK aer 00:03:42.468 LINK mem_callbacks 00:03:42.468 LINK hello_blob 00:03:42.468 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:42.468 LINK spdk_dd 00:03:42.468 LINK thread 00:03:42.468 LINK idxd_perf 00:03:42.731 LINK reconnect 00:03:42.731 LINK nvmf 00:03:42.731 LINK spdk_trace 00:03:42.731 CXX test/cpp_headers/assert.o 00:03:42.731 CC test/event/reactor/reactor.o 00:03:42.731 CC test/nvme/reset/reset.o 00:03:42.731 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:42.731 CC test/env/memory/memory_ut.o 00:03:42.731 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:42.731 CXX test/cpp_headers/barrier.o 00:03:42.731 CC test/event/reactor_perf/reactor_perf.o 00:03:42.731 CC test/env/pci/pci_ut.o 00:03:42.731 CXX test/cpp_headers/base64.o 00:03:42.732 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:42.732 LINK test_dma 00:03:42.732 CC test/nvme/sgl/sgl.o 00:03:42.732 LINK bdevio 00:03:42.732 CC examples/nvme/arbitration/arbitration.o 00:03:42.732 CC test/event/app_repeat/app_repeat.o 00:03:42.732 CC examples/nvme/hotplug/hotplug.o 00:03:42.732 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:42.732 CXX test/cpp_headers/bdev.o 00:03:42.995 CC examples/nvme/abort/abort.o 00:03:42.995 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.995 LINK env_dpdk_post_init 00:03:42.995 CC test/nvme/e2edp/nvme_dp.o 00:03:42.995 LINK dif 00:03:42.995 LINK nvme_fuzz 00:03:42.995 LINK accel_perf 00:03:42.995 CC test/nvme/overhead/overhead.o 00:03:42.995 CXX test/cpp_headers/bdev_module.o 00:03:42.995 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:42.995 CXX test/cpp_headers/bdev_zone.o 00:03:42.995 CC test/event/scheduler/scheduler.o 00:03:42.995 LINK reactor 00:03:42.995 LINK spdk_nvme 00:03:42.995 LINK spdk_bdev 00:03:42.995 CXX test/cpp_headers/bit_array.o 00:03:42.995 CC test/nvme/err_injection/err_injection.o 00:03:42.995 CC test/nvme/startup/startup.o 00:03:42.995 LINK blobcli 00:03:42.995 CXX test/cpp_headers/bit_pool.o 00:03:42.995 CC test/nvme/reserve/reserve.o 00:03:42.995 CC test/nvme/simple_copy/simple_copy.o 00:03:42.995 CXX test/cpp_headers/blob_bdev.o 00:03:42.995 LINK reactor_perf 00:03:42.995 CC test/nvme/connect_stress/connect_stress.o 00:03:42.995 LINK app_repeat 00:03:42.995 CXX test/cpp_headers/blobfs_bdev.o 00:03:43.255 CXX test/cpp_headers/blobfs.o 00:03:43.255 CXX test/cpp_headers/blob.o 00:03:43.255 CC 
test/nvme/boot_partition/boot_partition.o 00:03:43.255 CXX test/cpp_headers/conf.o 00:03:43.255 LINK cmb_copy 00:03:43.256 CXX test/cpp_headers/config.o 00:03:43.256 CC test/nvme/compliance/nvme_compliance.o 00:03:43.256 CC test/nvme/fused_ordering/fused_ordering.o 00:03:43.256 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:43.256 LINK reset 00:03:43.256 CXX test/cpp_headers/cpuset.o 00:03:43.256 CXX test/cpp_headers/crc16.o 00:03:43.256 LINK pmr_persistence 00:03:43.256 CXX test/cpp_headers/crc32.o 00:03:43.256 CXX test/cpp_headers/crc64.o 00:03:43.256 CXX test/cpp_headers/dif.o 00:03:43.256 LINK sgl 00:03:43.256 LINK hotplug 00:03:43.256 CXX test/cpp_headers/dma.o 00:03:43.256 CXX test/cpp_headers/endian.o 00:03:43.256 CXX test/cpp_headers/env_dpdk.o 00:03:43.256 LINK spdk_nvme_perf 00:03:43.256 LINK startup 00:03:43.256 CXX test/cpp_headers/env.o 00:03:43.520 CC test/nvme/fdp/fdp.o 00:03:43.520 CXX test/cpp_headers/event.o 00:03:43.520 CC test/nvme/cuse/cuse.o 00:03:43.520 LINK err_injection 00:03:43.520 CXX test/cpp_headers/fd_group.o 00:03:43.520 LINK scheduler 00:03:43.520 LINK arbitration 00:03:43.520 CXX test/cpp_headers/fd.o 00:03:43.520 LINK connect_stress 00:03:43.520 LINK spdk_nvme_identify 00:03:43.520 CXX test/cpp_headers/file.o 00:03:43.520 LINK reserve 00:03:43.520 LINK nvme_dp 00:03:43.520 CXX test/cpp_headers/ftl.o 00:03:43.520 LINK bdevperf 00:03:43.520 LINK overhead 00:03:43.520 LINK pci_ut 00:03:43.520 LINK spdk_top 00:03:43.520 LINK simple_copy 00:03:43.520 LINK boot_partition 00:03:43.520 CXX test/cpp_headers/gpt_spec.o 00:03:43.520 CXX test/cpp_headers/hexlify.o 00:03:43.520 CXX test/cpp_headers/histogram_data.o 00:03:43.520 CXX test/cpp_headers/idxd.o 00:03:43.520 CXX test/cpp_headers/idxd_spec.o 00:03:43.520 LINK abort 00:03:43.520 CXX test/cpp_headers/init.o 00:03:43.520 CXX test/cpp_headers/ioat.o 00:03:43.520 CXX test/cpp_headers/ioat_spec.o 00:03:43.520 CXX test/cpp_headers/iscsi_spec.o 00:03:43.520 CXX test/cpp_headers/json.o 00:03:43.520 CXX test/cpp_headers/jsonrpc.o 00:03:43.785 CXX test/cpp_headers/keyring.o 00:03:43.785 LINK doorbell_aers 00:03:43.785 CXX test/cpp_headers/keyring_module.o 00:03:43.785 LINK fused_ordering 00:03:43.785 LINK nvme_manage 00:03:43.785 LINK vhost_fuzz 00:03:43.786 CXX test/cpp_headers/likely.o 00:03:43.786 CXX test/cpp_headers/log.o 00:03:43.786 CXX test/cpp_headers/lvol.o 00:03:43.786 CXX test/cpp_headers/memory.o 00:03:43.786 CXX test/cpp_headers/mmio.o 00:03:43.786 CXX test/cpp_headers/nbd.o 00:03:43.786 CXX test/cpp_headers/notify.o 00:03:43.786 CXX test/cpp_headers/nvme.o 00:03:43.786 CXX test/cpp_headers/nvme_intel.o 00:03:43.786 CXX test/cpp_headers/nvme_ocssd.o 00:03:43.786 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:43.786 CXX test/cpp_headers/nvme_spec.o 00:03:43.786 CXX test/cpp_headers/nvme_zns.o 00:03:43.786 CXX test/cpp_headers/nvmf_cmd.o 00:03:43.786 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:43.786 CXX test/cpp_headers/nvmf.o 00:03:43.786 CXX test/cpp_headers/nvmf_spec.o 00:03:43.786 CXX test/cpp_headers/nvmf_transport.o 00:03:43.786 CXX test/cpp_headers/opal.o 00:03:43.786 CXX test/cpp_headers/opal_spec.o 00:03:43.786 CXX test/cpp_headers/pci_ids.o 00:03:43.786 CXX test/cpp_headers/pipe.o 00:03:43.786 LINK nvme_compliance 00:03:44.045 CXX test/cpp_headers/queue.o 00:03:44.045 CXX test/cpp_headers/reduce.o 00:03:44.045 CXX test/cpp_headers/rpc.o 00:03:44.045 CXX test/cpp_headers/scheduler.o 00:03:44.045 CXX test/cpp_headers/scsi.o 00:03:44.045 CXX test/cpp_headers/scsi_spec.o 00:03:44.045 CXX 
test/cpp_headers/sock.o 00:03:44.045 CXX test/cpp_headers/stdinc.o 00:03:44.045 CXX test/cpp_headers/string.o 00:03:44.045 CXX test/cpp_headers/thread.o 00:03:44.045 CXX test/cpp_headers/trace.o 00:03:44.045 CXX test/cpp_headers/trace_parser.o 00:03:44.045 CXX test/cpp_headers/tree.o 00:03:44.045 CXX test/cpp_headers/ublk.o 00:03:44.045 CXX test/cpp_headers/util.o 00:03:44.045 CXX test/cpp_headers/uuid.o 00:03:44.045 CXX test/cpp_headers/version.o 00:03:44.045 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.045 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.045 CXX test/cpp_headers/vhost.o 00:03:44.045 CXX test/cpp_headers/vmd.o 00:03:44.045 CXX test/cpp_headers/xor.o 00:03:44.045 LINK fdp 00:03:44.045 CXX test/cpp_headers/zipf.o 00:03:44.045 LINK memory_ut 00:03:45.419 LINK iscsi_fuzz 00:03:45.419 LINK cuse 00:03:47.981 LINK esnap 00:03:48.240 00:03:48.240 real 0m40.099s 00:03:48.240 user 7m33.601s 00:03:48.240 sys 1m48.380s 00:03:48.240 18:37:00 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:48.240 18:37:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.240 ************************************ 00:03:48.240 END TEST make 00:03:48.240 ************************************ 00:03:48.240 18:37:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:48.240 18:37:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:48.240 18:37:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:48.240 18:37:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.240 18:37:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:48.240 18:37:00 -- pm/common@44 -- $ pid=3295717 00:03:48.240 18:37:00 -- pm/common@50 -- $ kill -TERM 3295717 00:03:48.240 18:37:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.240 18:37:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:48.240 18:37:00 -- pm/common@44 -- $ pid=3295719 00:03:48.240 18:37:00 -- pm/common@50 -- $ kill -TERM 3295719 00:03:48.240 18:37:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.240 18:37:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:48.240 18:37:00 -- pm/common@44 -- $ pid=3295721 00:03:48.240 18:37:00 -- pm/common@50 -- $ kill -TERM 3295721 00:03:48.240 18:37:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.240 18:37:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:48.240 18:37:00 -- pm/common@44 -- $ pid=3295749 00:03:48.240 18:37:00 -- pm/common@50 -- $ sudo -E kill -TERM 3295749 00:03:48.499 18:37:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:48.499 18:37:00 -- nvmf/common.sh@7 -- # uname -s 00:03:48.499 18:37:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.499 18:37:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.499 18:37:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.499 18:37:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.499 18:37:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.499 18:37:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.499 18:37:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.499 18:37:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:03:48.499 18:37:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.499 18:37:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.499 18:37:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:48.499 18:37:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:48.499 18:37:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.499 18:37:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.499 18:37:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:48.499 18:37:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.499 18:37:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:48.499 18:37:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.499 18:37:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.499 18:37:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.499 18:37:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.499 18:37:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.499 18:37:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.499 18:37:00 -- paths/export.sh@5 -- # export PATH 00:03:48.499 18:37:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.499 18:37:00 -- nvmf/common.sh@47 -- # : 0 00:03:48.499 18:37:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:48.499 18:37:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:48.499 18:37:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.499 18:37:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.499 18:37:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.499 18:37:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:48.499 18:37:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:48.499 18:37:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:48.499 18:37:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:48.499 18:37:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:48.499 18:37:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:48.499 18:37:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:48.499 18:37:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:48.499 18:37:00 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:48.499 18:37:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:48.499 18:37:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:48.499 18:37:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:48.499 18:37:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:48.499 18:37:00 -- spdk/autotest.sh@48 -- # udevadm_pid=3371396 00:03:48.499 18:37:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:48.499 18:37:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:48.499 18:37:00 -- pm/common@17 -- # local monitor 00:03:48.500 18:37:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.500 18:37:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.500 18:37:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.500 18:37:00 -- pm/common@21 -- # date +%s 00:03:48.500 18:37:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.500 18:37:00 -- pm/common@21 -- # date +%s 00:03:48.500 18:37:00 -- pm/common@25 -- # sleep 1 00:03:48.500 18:37:00 -- pm/common@21 -- # date +%s 00:03:48.500 18:37:00 -- pm/common@21 -- # date +%s 00:03:48.500 18:37:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721925420 00:03:48.500 18:37:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721925420 00:03:48.500 18:37:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721925420 00:03:48.500 18:37:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721925420 00:03:48.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721925420_collect-vmstat.pm.log 00:03:48.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721925420_collect-cpu-load.pm.log 00:03:48.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721925420_collect-cpu-temp.pm.log 00:03:48.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721925420_collect-bmc-pm.bmc.pm.log 00:03:49.434 18:37:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:49.434 18:37:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:49.434 18:37:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:49.434 18:37:01 -- common/autotest_common.sh@10 -- # set +x 00:03:49.434 18:37:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:49.434 18:37:01 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:49.434 18:37:01 -- common/autotest_common.sh@10 -- # set +x 00:03:49.434 18:37:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:49.434 18:37:01 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.434 18:37:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.434 18:37:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:49.434 18:37:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.434 18:37:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:49.434 18:37:01 -- common/autotest_common.sh@1451 -- # uname 00:03:49.434 18:37:01 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:49.434 18:37:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:49.434 18:37:01 -- common/autotest_common.sh@1471 -- # uname 00:03:49.434 18:37:01 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:49.434 18:37:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:49.434 18:37:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:49.434 18:37:01 -- spdk/autotest.sh@72 -- # hash lcov 00:03:49.434 18:37:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:49.434 18:37:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:49.434 --rc lcov_branch_coverage=1 00:03:49.434 --rc lcov_function_coverage=1 00:03:49.434 --rc genhtml_branch_coverage=1 00:03:49.434 --rc genhtml_function_coverage=1 00:03:49.434 --rc genhtml_legend=1 00:03:49.434 --rc geninfo_all_blocks=1 00:03:49.434 ' 00:03:49.434 18:37:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:49.434 --rc lcov_branch_coverage=1 00:03:49.434 --rc lcov_function_coverage=1 00:03:49.434 --rc genhtml_branch_coverage=1 00:03:49.434 --rc genhtml_function_coverage=1 00:03:49.434 --rc genhtml_legend=1 00:03:49.434 --rc geninfo_all_blocks=1 00:03:49.434 ' 00:03:49.434 18:37:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:49.434 --rc lcov_branch_coverage=1 00:03:49.434 --rc lcov_function_coverage=1 00:03:49.434 --rc genhtml_branch_coverage=1 00:03:49.434 --rc genhtml_function_coverage=1 00:03:49.434 --rc genhtml_legend=1 00:03:49.434 --rc geninfo_all_blocks=1 00:03:49.434 --no-external' 00:03:49.434 18:37:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:49.434 --rc lcov_branch_coverage=1 00:03:49.434 --rc lcov_function_coverage=1 00:03:49.434 --rc genhtml_branch_coverage=1 00:03:49.434 --rc genhtml_function_coverage=1 00:03:49.434 --rc genhtml_legend=1 00:03:49.434 --rc geninfo_all_blocks=1 00:03:49.434 --no-external' 00:03:49.434 18:37:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:49.434 lcov: LCOV version 1.14 00:03:49.434 18:37:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:04.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:19.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:19.187 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:19.187 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 
00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:19.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:19.187 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:19.188 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:19.188 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:22.464 18:37:33 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:22.464 18:37:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:22.464 18:37:33 -- common/autotest_common.sh@10 -- # set +x 00:04:22.464 18:37:33 -- spdk/autotest.sh@91 -- # rm -f 00:04:22.464 18:37:33 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.399 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:23.399 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:23.399 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:23.399 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:23.399 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:23.399 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:23.399 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:23.399 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:23.399 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:23.399 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:23.399 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:23.399 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:23.399 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:23.399 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:23.399 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:23.399 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:23.399 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:23.399 18:37:35 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:23.399 18:37:35 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:23.399 18:37:35 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:23.399 18:37:35 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:23.399 18:37:35 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:23.399 18:37:35 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:23.399 18:37:35 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:23.399 18:37:35 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.399 18:37:35 
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:23.399 18:37:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:23.399 18:37:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.399 18:37:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.399 18:37:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:23.399 18:37:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:23.399 18:37:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.657 No valid GPT data, bailing 00:04:23.657 18:37:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.657 18:37:35 -- scripts/common.sh@391 -- # pt= 00:04:23.657 18:37:35 -- scripts/common.sh@392 -- # return 1 00:04:23.657 18:37:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.657 1+0 records in 00:04:23.657 1+0 records out 00:04:23.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00249834 s, 420 MB/s 00:04:23.657 18:37:35 -- spdk/autotest.sh@118 -- # sync 00:04:23.657 18:37:35 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.657 18:37:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.657 18:37:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.559 18:37:37 -- spdk/autotest.sh@124 -- # uname -s 00:04:25.559 18:37:37 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:25.559 18:37:37 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:25.559 18:37:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.559 18:37:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.559 18:37:37 -- common/autotest_common.sh@10 -- # set +x 00:04:25.559 ************************************ 00:04:25.559 START TEST setup.sh 00:04:25.559 ************************************ 00:04:25.559 18:37:37 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:25.559 * Looking for test storage... 00:04:25.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.559 18:37:37 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:25.559 18:37:37 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:25.559 18:37:37 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:25.559 18:37:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.559 18:37:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.559 18:37:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.559 ************************************ 00:04:25.559 START TEST acl 00:04:25.559 ************************************ 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:25.559 * Looking for test storage... 
00:04:25.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.559 18:37:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.559 18:37:37 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:25.559 18:37:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:25.559 18:37:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:25.559 18:37:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:25.559 18:37:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:25.559 18:37:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:25.559 18:37:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.559 18:37:37 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.937 18:37:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:26.937 18:37:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:26.937 18:37:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:26.937 18:37:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:26.937 18:37:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.937 18:37:38 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:27.883 Hugepages 00:04:27.883 node hugesize free / total 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 00:04:27.883 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:27.883 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:28.143 18:37:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:28.143 18:37:39 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.143 18:37:39 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.143 18:37:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:28.143 ************************************ 00:04:28.143 START TEST denied 00:04:28.143 ************************************ 00:04:28.143 18:37:39 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:28.143 18:37:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:28.143 18:37:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:28.143 18:37:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:28.143 18:37:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.143 18:37:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.520 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:29.520 18:37:41 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.520 18:37:41 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.055 00:04:32.055 real 0m3.878s 00:04:32.055 user 0m1.156s 00:04:32.055 sys 0m1.793s 00:04:32.055 18:37:43 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.055 18:37:43 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:32.055 ************************************ 00:04:32.055 END TEST denied 00:04:32.055 ************************************ 00:04:32.055 18:37:43 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:32.055 18:37:43 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.055 18:37:43 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.055 18:37:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:32.055 ************************************ 00:04:32.055 START TEST allowed 00:04:32.055 ************************************ 00:04:32.055 18:37:43 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:32.055 18:37:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:32.055 18:37:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:32.055 18:37:43 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:32.055 18:37:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.055 18:37:43 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.590 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.590 18:37:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:34.590 18:37:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:34.590 18:37:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:34.590 18:37:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.590 18:37:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.971 00:04:35.971 real 0m3.750s 00:04:35.971 user 0m0.921s 00:04:35.972 sys 0m1.624s 00:04:35.972 18:37:47 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.972 18:37:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:35.972 ************************************ 00:04:35.972 END TEST allowed 00:04:35.972 ************************************ 00:04:35.972 00:04:35.972 real 0m10.357s 00:04:35.972 user 0m3.205s 00:04:35.972 sys 0m5.085s 00:04:35.972 18:37:47 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.972 18:37:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:35.972 ************************************ 00:04:35.972 END TEST acl 00:04:35.972 ************************************ 00:04:35.972 18:37:47 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.972 18:37:47 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.972 18:37:47 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.972 18:37:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.972 ************************************ 00:04:35.972 START TEST hugepages 00:04:35.972 ************************************ 00:04:35.972 18:37:47 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.972 * Looking for test storage... 00:04:35.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 41843784 kB' 'MemAvailable: 45326028 kB' 'Buffers: 2704 kB' 'Cached: 12179796 kB' 'SwapCached: 0 kB' 'Active: 9160516 kB' 'Inactive: 3491728 kB' 'Active(anon): 8772748 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472932 kB' 'Mapped: 192840 kB' 'Shmem: 8303004 kB' 'KReclaimable: 193176 kB' 'Slab: 553160 kB' 'SReclaimable: 193176 kB' 'SUnreclaim: 359984 kB' 'KernelStack: 12768 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 9862492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.972 18:37:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:35.972 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
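The xtrace run above is the setup/common.sh get_meminfo helper stepping through /proc/meminfo field by field until it reaches Hugepagesize (it echoes 2048 just below, i.e. the default 2 MiB hugepages). A minimal stand-alone sketch of the same lookup, for reference only and not the helper itself:

    # Print the default hugepage size (in kB) straight from /proc/meminfo.
    awk '$1 == "Hugepagesize:" { print $2; exit }' /proc/meminfo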
00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:35.973 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.974 18:37:47 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:35.974 18:37:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:35.974 18:37:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.974 18:37:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.974 18:37:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.974 ************************************ 00:04:35.974 START TEST default_setup 00:04:35.974 ************************************ 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.974 18:37:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.352 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.352 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.352 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.352 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.352 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.352 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.352 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
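Here the default_setup test asks for 1024 hugepages of the default 2048 kB size (2097152 kB in total) and hands control to scripts/setup.sh, which rebinds the ioatdma and NVMe devices to vfio-pci as logged around this point. Stripped of the harness, the hugepage request itself comes down to the kernel interface referenced above (/proc/sys/vm/nr_hugepages); a bare sketch, assuming the default 2 MiB page size:

    # Request 1024 x 2 MiB hugepages and confirm what the kernel granted.
    echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
    grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo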
00:04:37.352 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.352 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:38.297 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43954660 kB' 'MemAvailable: 47436904 kB' 'Buffers: 2704 kB' 'Cached: 12179884 kB' 'SwapCached: 0 kB' 'Active: 9178928 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791160 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491436 kB' 'Mapped: 192948 kB' 'Shmem: 8303092 kB' 'KReclaimable: 193176 kB' 'Slab: 552372 kB' 'SReclaimable: 193176 kB' 'SUnreclaim: 359196 kB' 'KernelStack: 12720 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 
18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.297 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43954660 kB' 'MemAvailable: 47436904 kB' 'Buffers: 2704 kB' 'Cached: 12179888 kB' 'SwapCached: 0 kB' 'Active: 9179028 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791260 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491520 kB' 'Mapped: 192936 kB' 'Shmem: 8303096 kB' 'KReclaimable: 193176 kB' 'Slab: 552364 kB' 'SReclaimable: 193176 kB' 'SUnreclaim: 359188 kB' 'KernelStack: 12688 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:38.298 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.299 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43954424 kB' 'MemAvailable: 47436708 kB' 'Buffers: 2704 kB' 'Cached: 12179904 kB' 'SwapCached: 0 kB' 'Active: 9178944 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791176 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491440 kB' 'Mapped: 192860 kB' 'Shmem: 8303112 kB' 'KReclaimable: 193256 kB' 'Slab: 552456 kB' 'SReclaimable: 193256 kB' 'SUnreclaim: 359200 kB' 'KernelStack: 12720 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.300 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.301 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.301 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.302 nr_hugepages=1024 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.302 resv_hugepages=0 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.302 surplus_hugepages=0 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.302 anon_hugepages=0 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43954740 kB' 'MemAvailable: 47437024 kB' 'Buffers: 2704 kB' 'Cached: 12179928 kB' 'SwapCached: 0 kB' 'Active: 9178940 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791172 kB' 'Inactive(anon): 0 kB' 
'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491404 kB' 'Mapped: 192860 kB' 'Shmem: 8303136 kB' 'KReclaimable: 193256 kB' 'Slab: 552456 kB' 'SReclaimable: 193256 kB' 'SUnreclaim: 359200 kB' 'KernelStack: 12704 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.302 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 
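The block just printed is the whole-system /proc/meminfo snapshot that the key-by-key scan is walking. Its hugepage counters are internally consistent: Hugepagesize is 2048 kB and HugePages_Total is 1024, so the pre-allocated pool pins 1024 * 2048 kB = 2097152 kB (2 GiB), which is exactly the Hugetlb figure in the snapshot, and HugePages_Free is still 1024 because nothing has mapped any of the pool yet. A one-line shell check of that relationship (illustrative only, not part of the traced script):

  echo "$((1024 * 2048)) kB"   # HugePages_Total x Hugepagesize = 2097152 kB, matching the Hugetlb line above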
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.303 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
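The scan that just returned 1024 is the get_meminfo helper from setup/common.sh: it snapshots a meminfo file into an array, strips any leading "Node <n> " prefix, then walks the entries with IFS=': ' until the requested key matches and echoes that key's value. The per-node pass a little further down does the same thing against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A condensed sketch of that pattern follows; the names mirror the trace, but this standalone form is an approximation rather than the verbatim script:

  #!/usr/bin/env bash
  shopt -s extglob   # the "Node +([0-9]) " strip below needs extended globs, as in the traced script

  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      # with a node argument, prefer that node's own meminfo file
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node 0 "
      local IFS=': ' line
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the [[ X == \H\u\g\e... ]] comparisons seen in the trace
          echo "$val"
          return 0
      done
  }

  get_meminfo HugePages_Total        # 1024 on this box
  get_meminfo HugePages_Surp 0       # per-node variant, as used in the node-0 pass below

verify_nr_hugepages then feeds these numbers into its accounting: resv comes from HugePages_Rsvd, surp from HugePages_Surp, and the (( 1024 == nr_hugepages + surp + resv )) checks that bracket this scan decide whether the default setup left exactly the requested 1024 pages.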
00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20623296 kB' 'MemUsed: 12253644 kB' 'SwapCached: 0 kB' 'Active: 5874140 kB' 'Inactive: 3354276 kB' 'Active(anon): 5605856 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354276 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9107872 kB' 'Mapped: 94332 kB' 'AnonPages: 123696 kB' 'Shmem: 5485312 kB' 'KernelStack: 7432 kB' 'PageTables: 3500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91088 kB' 'Slab: 293936 kB' 'SReclaimable: 91088 kB' 'SUnreclaim: 202848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.304 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.305 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.305 18:37:50 
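For the per-node tally that follows, the reference data is the node-0 snapshot printed at the start of this pass (the printf that read /sys/devices/system/node/node0/meminfo): it reports HugePages_Total 1024, HugePages_Free 1024 and HugePages_Surp 0 for node 0, i.e. the whole default pool landed on one node, which is why the summary line below says node0=1024 expecting 1024. A direct way to spot-check the same per-node counters by hand (illustrative command, not from the traced script):

  grep HugePages_ /sys/devices/system/node/node0/meminfo   # Total/Free/Surp for node 0 only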
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.306 node0=1024 expecting 1024 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.306 00:04:38.306 real 0m2.460s 00:04:38.306 user 0m0.705s 00:04:38.306 sys 0m0.911s 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.306 18:37:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:38.306 ************************************ 00:04:38.306 END TEST default_setup 00:04:38.306 ************************************ 00:04:38.565 18:37:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:38.565 18:37:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.565 18:37:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.565 18:37:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.565 ************************************ 00:04:38.565 START TEST per_node_1G_alloc 00:04:38.565 ************************************ 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.565 18:37:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.501 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.501 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.501 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.501 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.501 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.501 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.501 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.501 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.501 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.501 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.501 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.501 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.501 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.501 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.501 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.501 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.501 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:39.765 18:37:51 
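Here the second test case sets up its own pool: get_test_nr_hugepages 1048576 0 1 converts the 1048576 kB (1 GiB) request into 1048576 / 2048 = 512 default-size pages and, because node IDs 0 and 1 were passed explicitly, asks for 512 pages on each node (nodes_test[0]=512, nodes_test[1]=512). It then re-runs scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1; the device lines above simply confirm that the PCI functions the test uses are already bound to vfio-pci. The eventual effect of such a per-node request is the standard kernel sysfs interface sketched below (an illustration of that interface, not the verbatim setup.sh logic):

  # ask for 512 x 2048 kB pages on each of node 0 and node 1 (run as root)
  NRHUGE=512
  for node in 0 1; do
      echo "$NRHUGE" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
  done
  # confirm what each node actually granted
  cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages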
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43973568 kB' 'MemAvailable: 47455836 kB' 'Buffers: 2704 kB' 'Cached: 12180004 kB' 'SwapCached: 0 kB' 'Active: 9179148 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791380 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491316 kB' 'Mapped: 192908 kB' 'Shmem: 8303212 kB' 'KReclaimable: 193224 kB' 'Slab: 552264 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359040 kB' 'KernelStack: 12704 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.765 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 
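One detail worth noting in this verify pass: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test near its start is reading the transparent-hugepage mode, and on this host the bracketed (active) value is madvise, so THP is not fully disabled and the script goes on to sample AnonHugePages, which is what the scan in progress here is doing (presumably so THP-backed anonymous memory can be accounted for separately from the explicit pool). The same condition can be checked by hand against the standard THP knob (illustrative snippet, not the verbatim hugepages.sh):

  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      grep AnonHugePages /proc/meminfo   # THP not disabled, so anonymous huge pages may appear here
  fi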
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.766 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.767 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43973996 kB' 'MemAvailable: 47456264 kB' 'Buffers: 2704 kB' 'Cached: 12180008 kB' 'SwapCached: 0 kB' 'Active: 9179408 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791640 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491588 kB' 'Mapped: 192896 kB' 'Shmem: 8303216 kB' 'KReclaimable: 193224 kB' 'Slab: 552264 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359040 kB' 'KernelStack: 12736 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB'
[setup/common.sh@31-32: field-by-field scan of the snapshot for HugePages_Surp; fields MemTotal through HugePages_Rsvd are skipped with "continue"]
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
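The trace above is the get_meminfo helper from setup/common.sh resolving HugePages_Surp: since node= is empty, the /sys per-node path does not exist and /proc/meminfo is used, any "Node N " prefix is stripped, and the fields are walked one by one until the requested key matches, at which point its value is echoed. Below is a rough, self-contained bash sketch of that flow for illustration only; the function name and exact structure are assumptions, not the verbatim SPDK helper.

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node N " prefix strip below

# get_meminfo_sketch KEY [NODE]  (hypothetical name, for illustration only)
# Reads /proc/meminfo, or the per-NUMA-node meminfo when NODE is given and the
# file exists, drops the "Node N " prefix used by the per-node files, then scans
# field by field until KEY is found and prints its value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every field that is not KEY
        echo "${val:-0}"
        return 0
    done
    return 1
}

# e.g. "get_meminfo_sketch HugePages_Surp" prints 0 on the machine traced above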
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.769 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43975048 kB' 'MemAvailable: 47457316 kB' 'Buffers: 2704 kB' 'Cached: 12180024 kB' 'SwapCached: 0 kB' 'Active: 9179392 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791624 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491568 kB' 'Mapped: 192896 kB' 'Shmem: 8303232 kB' 'KReclaimable: 193224 kB' 'Slab: 552308 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359084 kB' 'KernelStack: 12736 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB'
[setup/common.sh@31-32: field-by-field scan of the snapshot for HugePages_Rsvd; fields MemTotal through HugePages_Free are skipped with "continue"]
00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:39.771 nr_hugepages=1024
00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.771 resv_hugepages=0
00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.771 surplus_hugepages=0
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.771 anon_hugepages=0 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43975448 kB' 'MemAvailable: 47457716 kB' 'Buffers: 2704 kB' 'Cached: 12180048 kB' 'SwapCached: 0 kB' 'Active: 9179432 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791664 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491568 kB' 'Mapped: 192896 kB' 'Shmem: 8303256 kB' 'KReclaimable: 193224 kB' 'Slab: 552308 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359084 kB' 'KernelStack: 12736 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.771 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.772 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.773 18:37:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21689468 kB' 'MemUsed: 11187472 kB' 'SwapCached: 0 kB' 'Active: 5874136 kB' 'Inactive: 3354276 kB' 'Active(anon): 5605852 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354276 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9107880 kB' 'Mapped: 94344 kB' 'AnonPages: 123612 kB' 'Shmem: 5485320 kB' 'KernelStack: 7432 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91064 kB' 'Slab: 293852 kB' 'SReclaimable: 91064 kB' 'SUnreclaim: 202788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.773 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:39.774 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22285896 kB' 'MemUsed: 5378876 kB' 'SwapCached: 0 kB' 'Active: 3305344 kB' 'Inactive: 137452 kB' 'Active(anon): 3185860 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3074916 kB' 'Mapped: 98552 kB' 'AnonPages: 367952 kB' 'Shmem: 2817980 kB' 'KernelStack: 5304 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102160 kB' 'Slab: 258456 kB' 'SReclaimable: 102160 kB' 'SUnreclaim: 156296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.775 
18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.775 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:39.776 node0=512 expecting 512 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:39.776 node1=512 expecting 512 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:39.776 00:04:39.776 real 0m1.405s 00:04:39.776 user 0m0.600s 00:04:39.776 sys 0m0.761s 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.776 18:37:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.776 ************************************ 00:04:39.776 END TEST per_node_1G_alloc 00:04:39.776 ************************************ 00:04:40.036 18:37:51 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:40.036 18:37:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.036 18:37:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.036 18:37:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.036 ************************************ 00:04:40.036 START TEST even_2G_alloc 
00:04:40.036 ************************************ 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.036 18:37:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.971 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:40.971 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:40.971 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:40.971 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:40.971 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:40.971 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:40.971 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:40.971 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:40.971 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:40.971 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:40.971 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:40.971 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:40.971 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:40.971 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:40.971 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:40.971 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:40.971 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43980292 kB' 'MemAvailable: 47462560 kB' 'Buffers: 2704 kB' 'Cached: 12180144 kB' 'SwapCached: 0 kB' 'Active: 9184068 kB' 'Inactive: 3491728 kB' 'Active(anon): 8796300 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496144 kB' 'Mapped: 193376 kB' 'Shmem: 8303352 kB' 'KReclaimable: 193224 kB' 'Slab: 552340 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359116 kB' 'KernelStack: 12720 kB' 'PageTables: 
7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9888884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.234 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
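The trace around this point is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo source into an array with mapfile, then walks it entry by entry with `IFS=': '` and `read -r var val _`, hitting `continue` for every field that does not match the requested key (here AnonHugePages) and finally echoing the matching value. A minimal stand-alone sketch of that pattern (a simplification for illustration, not the actual setup/common.sh code) looks like this:

```bash
#!/usr/bin/env bash
# Simplified sketch of the field scan seen in the trace above.
# Node handling, the "Node N " prefix stripping and error paths are omitted.
get_meminfo_sketch() {
    local get=$1 mem_f=/proc/meminfo line var val _
    local -a mem
    mapfile -t mem < "$mem_f"               # snapshot the file once
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # skip non-matching fields
        echo "$val"                         # e.g. "0" for AnonHugePages
        return 0
    done
    return 1
}

get_meminfo_sketch AnonHugePages
```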
00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.235 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.236 18:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43979288 kB' 'MemAvailable: 47461556 kB' 'Buffers: 2704 kB' 'Cached: 12180148 kB' 'SwapCached: 0 kB' 'Active: 9179764 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791996 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491792 kB' 'Mapped: 193364 kB' 'Shmem: 8303356 kB' 'KReclaimable: 193224 kB' 'Slab: 552356 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359132 kB' 'KernelStack: 12768 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9883984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.236 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
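Each get_meminfo call in the trace ends the same way: the matching value is printed with `echo`, the function returns 0, and hugepages.sh captures the output into a variable (`anon=0` above, `surp=0` shortly after this scan completes). On the caller side that is plain command substitution; a hedged example of the calling pattern, reusing the sketch function above (the `resv` name is presumed, since this run only shows the `get_meminfo HugePages_Rsvd` call, not the assignment):

```bash
# How the verification stage consumes the helper's stdout (hypothetical
# variable names mirroring the anon/surp assignments visible in the trace).
anon=$(get_meminfo_sketch AnonHugePages)   # anonymous THP-backed memory, kB
surp=$(get_meminfo_sketch HugePages_Surp)  # surplus persistent hugepages
resv=$(get_meminfo_sketch HugePages_Rsvd)  # reserved but not yet faulted in
echo "anon=${anon} surp=${surp} resv=${resv}"
```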
00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.237 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43978572 kB' 'MemAvailable: 47460840 kB' 'Buffers: 2704 kB' 'Cached: 12180164 kB' 'SwapCached: 0 kB' 'Active: 9179552 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791784 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491668 kB' 'Mapped: 192884 kB' 'Shmem: 8303372 kB' 'KReclaimable: 193224 kB' 'Slab: 552440 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359216 kB' 'KernelStack: 12768 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9884004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
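The `[[ -e /sys/devices/system/node/node/meminfo ]]` test that opens every get_meminfo call in this trace is the per-node switch: with a node number the helper would read /sys/devices/system/node/node&lt;N&gt;/meminfo and strip the leading `Node N ` prefix (the `mem=("${mem[@]#Node +([0-9]) }")` expansion seen at setup/common.sh@29); with no node, as here, the literal path `node/meminfo` does not exist and the helper falls back to /proc/meminfo. A sketch of that source selection, under the same assumptions as the snippets above:

```bash
# Hedged sketch of the meminfo source selection visible in the trace.
# With node="" the /sys path does not exist, so /proc/meminfo is used.
pick_meminfo_source() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    echo "$mem_f"
}

pick_meminfo_source      # -> /proc/meminfo (no node given, as in this run)
pick_meminfo_source 0    # -> per-node file, if node0 exists on the system
```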
00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.238 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 
18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.239 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.240 nr_hugepages=1024 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.240 resv_hugepages=0 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.240 surplus_hugepages=0 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.240 anon_hugepages=0 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43978572 
kB' 'MemAvailable: 47460840 kB' 'Buffers: 2704 kB' 'Cached: 12180188 kB' 'SwapCached: 0 kB' 'Active: 9179532 kB' 'Inactive: 3491728 kB' 'Active(anon): 8791764 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491632 kB' 'Mapped: 192884 kB' 'Shmem: 8303396 kB' 'KReclaimable: 193224 kB' 'Slab: 552440 kB' 'SReclaimable: 193224 kB' 'SUnreclaim: 359216 kB' 'KernelStack: 12752 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9884028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.240 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.241 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
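(The trace that follows is the even_2G_alloc bookkeeping itself: once HugePages_Total reads back as 1024, hugepages.sh enumerates the NUMA nodes under /sys/devices/system/node, expects an even 512-page share per node on this two-node box, and re-reads HugePages_Surp from each node's meminfo. The condensed sketch below uses the illustrative get_meminfo_sketch helper from earlier; variable names such as nodes_sys follow the trace, but the real get_nodes logic may differ in detail.)

    shopt -s extglob nullglob
    # Enumerate NUMA nodes the same way the trace does and record the expected
    # even split of 1024 x 2 MiB hugepages (512 per node on this host).
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}                                 # 2 in this run

    total=$(get_meminfo_sketch HugePages_Total)               # 1024
    resv=$(get_meminfo_sketch HugePages_Rsvd)                 # 0
    (( total == 512 * no_nodes + resv )) || echo "uneven hugepage split" >&2

    # Per-node surplus should stay 0 when the even allocation succeeded.
    for n in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo_sketch HugePages_Surp "$n")
        (( surp == 0 )) || echo "node$n has $surp surplus pages" >&2
    done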
00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21685828 kB' 'MemUsed: 11191112 kB' 'SwapCached: 0 kB' 'Active: 5873668 kB' 'Inactive: 3354276 kB' 'Active(anon): 5605384 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354276 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9107880 kB' 'Mapped: 94352 kB' 'AnonPages: 123208 kB' 'Shmem: 5485320 kB' 'KernelStack: 7400 kB' 'PageTables: 3412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91064 kB' 'Slab: 293956 kB' 'SReclaimable: 91064 kB' 'SUnreclaim: 202892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.242 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22292744 kB' 'MemUsed: 5372028 kB' 'SwapCached: 0 kB' 'Active: 3305912 kB' 'Inactive: 137452 kB' 'Active(anon): 3186428 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3075052 kB' 'Mapped: 98532 kB' 'AnonPages: 368420 kB' 'Shmem: 2818116 kB' 'KernelStack: 5352 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102160 kB' 'Slab: 258484 kB' 'SReclaimable: 102160 kB' 'SUnreclaim: 156324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.243 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.244 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:41.245 node0=512 expecting 512 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:41.245 node1=512 expecting 512 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:41.245 00:04:41.245 real 0m1.387s 00:04:41.245 user 0m0.586s 00:04:41.245 sys 0m0.764s 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.245 18:37:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.245 ************************************ 00:04:41.245 END TEST even_2G_alloc 00:04:41.245 ************************************ 00:04:41.245 18:37:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:41.245 18:37:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.245 18:37:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.245 18:37:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.245 ************************************ 00:04:41.245 START TEST odd_alloc 00:04:41.245 ************************************ 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:41.245 18:37:53 
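The even_2G_alloc trace above ends with both NUMA nodes reporting the expected 512 huge pages, and the odd_alloc test that starts next requests HUGEMEM=2049, i.e. 1025 pages that the setup splits unevenly across the two nodes (513 on one, 512 on the other). Below is a minimal editor's sketch of the per-node bookkeeping the xtrace is walking through, reconstructed from the hugepages.sh@115-@130 step numbers visible in the trace rather than copied from the script; the node counts, the resv value, and the get_meminfo_surp stand-in are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: mirrors the shape of the traced loops, with example values.
resv=0                                   # reserved pages added per node (0 in this run)
nodes_test=([0]=512 [1]=512)             # expected per-node huge pages (even 2G split)
nodes_sys=([0]=512 [1]=512)              # counts read back from the kernel
get_meminfo_surp() { echo 0; }           # stand-in for "get_meminfo HugePages_Surp <node>"

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                         # add reserved pages
    (( nodes_test[node] += $(get_meminfo_surp "$node") ))  # add surplus pages
done
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1      # set of distinct expected counts
    sorted_s[${nodes_sys[node]}]=1       # set of distinct observed counts
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
(( ${#sorted_t[@]} == 1 )) && echo "allocation is even across both nodes"

Run as-is this prints the same "node0=512 expecting 512" / "node1=512 expecting 512" lines that appear in the trace, followed by the evenness check that the test asserts with [[ 512 == 512 ]].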
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.245 18:37:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.625 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.625 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:42.625 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.625 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.625 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.625 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:42.625 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.625 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.625 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.625 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.625 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.625 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.625 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.625 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:42.625 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.625 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.625 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43979900 kB' 'MemAvailable: 47462160 kB' 'Buffers: 2704 kB' 'Cached: 12180272 kB' 'SwapCached: 0 kB' 'Active: 9176864 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789096 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488504 kB' 'Mapped: 192064 kB' 'Shmem: 8303480 kB' 'KReclaimable: 193208 kB' 'Slab: 552392 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359184 kB' 'KernelStack: 12688 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9869020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1703516 kB' 
'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.625 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43984140 kB' 'MemAvailable: 47466400 kB' 'Buffers: 2704 kB' 'Cached: 12180276 kB' 'SwapCached: 0 kB' 'Active: 9176500 kB' 'Inactive: 3491728 kB' 'Active(anon): 8788732 kB' 'Inactive(anon): 0 
kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488572 kB' 'Mapped: 192128 kB' 'Shmem: 8303484 kB' 'KReclaimable: 193208 kB' 'Slab: 552404 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359196 kB' 'KernelStack: 12704 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9869040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 
18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.626 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue -- every remaining field from SReclaimable through HugePages_Rsvd is read and skipped (none matches HugePages_Surp)
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.627 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43985760 kB' 'MemAvailable: 47468020 kB' 'Buffers: 2704 kB' 'Cached: 12180292 kB' 'SwapCached: 0 kB' 'Active: 9176332 kB' 'Inactive: 3491728 kB' 'Active(anon): 8788564 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488368 kB' 'Mapped: 192052 kB' 'Shmem: 8303500 kB' 'KReclaimable: 193208 kB' 'Slab: 552428 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359220 kB' 'KernelStack: 12688 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9869060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB'
00:04:42.628 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue -- every field from MemTotal through HugePages_Free is read and skipped (none matches HugePages_Rsvd)
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:42.629 nr_hugepages=1025
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.629 resv_hugepages=0
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.629 surplus_hugepages=0
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.629 anon_hugepages=0
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
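The two arithmetic checks above are the core of the odd_alloc assertion: with HugePages_Surp and HugePages_Rsvd both 0, the 1025 requested pages must be reported in full by HugePages_Total. A minimal standalone sketch of the same accounting check follows; it is not the SPDK setup scripts themselves, and the awk extraction and variable names are illustrative.

  #!/usr/bin/env bash
  # Sketch: confirm an odd hugepage request is fully accounted for in /proc/meminfo.
  nr_hugepages=1025   # the odd count requested by the test
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  # Same relation the trace asserts: total == requested + surplus + reserved
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2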
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43987436 kB' 'MemAvailable: 47469696 kB' 'Buffers: 2704 kB' 'Cached: 12180312 kB' 'SwapCached: 0 kB' 'Active: 9176640 kB' 'Inactive: 3491728 kB' 'Active(anon): 8788872 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488648 kB' 'Mapped: 192052 kB' 'Shmem: 8303520 kB' 'KReclaimable: 193208 kB' 'Slab: 552428 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359220 kB' 'KernelStack: 12720 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9871448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB'
00:04:42.629 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue -- every field from MemTotal through Unaccepted is read and skipped (none matches HugePages_Total)
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
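The lookup above returns 1025 by walking /proc/meminfo with IFS=': ' and read -r var val _, skipping every field until the requested one is reached. Below is a hypothetical re-implementation of just that scan pattern; the real helper in setup/common.sh additionally selects a per-node file and strips its "Node N" prefix, and the function name here is illustrative.

  # Sketch of the field-scan pattern seen in the trace (helper name is illustrative).
  get_field() {
      local want="$1" file="${2:-/proc/meminfo}" var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # non-matching fields are skipped, as in the log
          echo "$val"
          return 0
      done <"$file"
      return 1
  }
  # Example: get_field HugePages_Total   -> prints 1025 in this run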
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21692076 kB' 'MemUsed: 11184864 kB' 'SwapCached: 0 kB' 'Active: 5874276 kB' 'Inactive: 3354276 kB' 'Active(anon): 5605992 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354276 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9107884 kB' 'Mapped: 93772 kB' 'AnonPages: 123804 kB' 'Shmem: 5485324 kB' 'KernelStack: 7544 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91056 kB' 'Slab: 293920 kB' 'SReclaimable: 91056 kB' 'SUnreclaim: 202864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
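The per-node dump above is read from /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the helper strips before scanning; with two nodes, the odd total of 1025 is expected to land as 512 on one node and 513 on the other. A short sketch of reading that split back from sysfs follows; it assumes the standard sysfs node layout, and the summing check and variable names are illustrative rather than taken from the SPDK scripts.

  # Sketch: read per-node hugepage totals and check they add up to the odd request.
  total=0
  for f in /sys/devices/system/node/node*/meminfo; do
      node=${f%/meminfo}; node=${node##*node}
      # Per-node lines look like: "Node 0 HugePages_Total:   512"
      count=$(awk '/HugePages_Total:/ {print $4}' "$f")
      printf 'node%s: %s hugepages\n' "$node" "$count"
      (( total += count ))
  done
  (( total == 1025 )) && echo "odd allocation split across nodes adds up"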
00:04:42.630 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue -- every node0 field from MemTotal through HugePages_Free is read and skipped (none matches HugePages_Surp)
00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.631 18:37:54
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22294920 kB' 'MemUsed: 5369852 kB' 'SwapCached: 0 kB' 'Active: 3302552 kB' 'Inactive: 137452 kB' 'Active(anon): 3183068 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3075132 kB' 'Mapped: 98288 kB' 'AnonPages: 365036 kB' 'Shmem: 2818196 kB' 'KernelStack: 5256 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102152 kB' 'Slab: 258484 kB' 'SReclaimable: 102152 kB' 'SUnreclaim: 156332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.631 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:42.632 node0=512 expecting 513 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:42.632 node1=513 expecting 512 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:42.632 00:04:42.632 real 0m1.401s 00:04:42.632 user 0m0.555s 00:04:42.632 sys 0m0.802s 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.632 18:37:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.890 ************************************ 00:04:42.890 END TEST odd_alloc 00:04:42.890 ************************************ 00:04:42.890 18:37:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:42.890 18:37:54 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.890 18:37:54 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.890 18:37:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.890 ************************************ 00:04:42.890 START TEST custom_alloc 00:04:42.890 ************************************ 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.890 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.891 18:37:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.828 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.828 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.828 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.828 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.828 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.828 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.828 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.828 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.828 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.828 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.828 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.828 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.828 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.828 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.828 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.828 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.828 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42918140 kB' 'MemAvailable: 46400400 kB' 'Buffers: 2704 kB' 'Cached: 12180408 kB' 'SwapCached: 0 kB' 'Active: 9176588 kB' 'Inactive: 3491728 kB' 'Active(anon): 8788820 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488292 kB' 'Mapped: 192112 kB' 'Shmem: 8303616 kB' 'KReclaimable: 193208 kB' 'Slab: 552480 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359272 kB' 'KernelStack: 12592 kB' 'PageTables: 7148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9869284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.094 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.095 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42920180 kB' 'MemAvailable: 46402440 kB' 'Buffers: 2704 kB' 'Cached: 12180408 kB' 'SwapCached: 0 kB' 'Active: 9177028 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789260 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488824 kB' 'Mapped: 192140 kB' 'Shmem: 8303616 kB' 'KReclaimable: 193208 kB' 'Slab: 552552 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359344 kB' 'KernelStack: 12752 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9869304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.096 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
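Editor's note: the xtrace lines above are the get_meminfo helper in test/setup/common.sh walking /proc/meminfo one "key: value" pair at a time, comparing each key against the requested field (AnonHugePages for the anon count, then HugePages_Surp) and returning 0 when the scan finishes. A minimal standalone sketch of that loop follows; the function name and fallback behaviour are illustrative assumptions, not the SPDK script itself, though the mapfile/prefix-strip/read pattern matches what the trace shows.

    #!/usr/bin/env bash
    # Sketch of the lookup performed in the trace above (hypothetical helper name).
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read the per-NUMA-node view instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Every non-matching key produces one "continue" entry in the xtrace output.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }

    # Example: get_meminfo_sketch HugePages_Surp   -> 0 on this machine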
00:04:44.097 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42920932 kB' 'MemAvailable: 46403192 kB' 'Buffers: 2704 kB' 'Cached: 12180420 kB' 'SwapCached: 0 kB' 'Active: 9176444 kB' 'Inactive: 3491728 kB' 'Active(anon): 8788676 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488216 kB' 'Mapped: 192064 kB' 'Shmem: 8303628 kB' 'KReclaimable: 193208 kB' 'Slab: 552508 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359300 kB' 'KernelStack: 12736 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9869324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.099 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:44.100 nr_hugepages=1536 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.100 resv_hugepages=0 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.100 surplus_hugepages=0 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.100 anon_hugepages=0 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42921644 kB' 'MemAvailable: 46403904 kB' 'Buffers: 2704 kB' 'Cached: 12180448 kB' 'SwapCached: 0 kB' 'Active: 9176828 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789060 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488672 kB' 'Mapped: 192064 kB' 'Shmem: 8303656 kB' 'KReclaimable: 193208 kB' 'Slab: 552492 kB' 'SReclaimable: 193208 kB' 'SUnreclaim: 359284 kB' 'KernelStack: 12720 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9869344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
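Editor's note: at this point the trace has collected anon=0, surp=0 and resv=0, printed nr_hugepages=1536, and is reading HugePages_Total back from /proc/meminfo (the scan continues below). The check being set up is the accounting identity visible at setup/hugepages.sh@107; the sketch below restates it using the hypothetical helper from the earlier note, with variable names that are illustrative only. The arithmetic also ties the meminfo dump together: 1536 pages x 2048 kB per page = 3145728 kB, exactly the Hugetlb figure reported above.

    # Sketch of the consistency check, assuming get_meminfo_sketch from the earlier note.
    nr_hugepages=1536
    anon=$(get_meminfo_sketch AnonHugePages)      # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # 1536 in this run
    # The pool is only considered sane if nothing is surplus or reserved and the
    # kernel reports exactly the requested number of pages.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2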
00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
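Editor's note: once HugePages_Total is confirmed, the custom_alloc test divides the 1536-page pool unevenly across the two NUMA nodes; the get_nodes trace further down records 512 pages for one node and 1024 for the other. The sketch below shows one way such a per-node split can be applied, using the kernel's standard per-node sysfs interface; this is an assumption for illustration, not necessarily the mechanism the SPDK scripts use.

    # Illustrative only: distribute a 2 MiB hugepage pool across NUMA nodes the
    # way this run does (512 pages on node0, 1024 on node1). The sysfs path is
    # the standard kernel interface for per-node hugepage sizing.
    declare -A nodes_test=([0]=512 [1]=1024)
    for node in "${!nodes_test[@]}"; do
        echo "${nodes_test[$node]}" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    done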
00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.100 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.101 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.101 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21680988 kB' 'MemUsed: 11195952 kB' 'SwapCached: 0 kB' 'Active: 5874612 kB' 'Inactive: 3354276 kB' 'Active(anon): 5606328 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354276 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9107972 kB' 'Mapped: 93776 kB' 'AnonPages: 124068 kB' 'Shmem: 5485412 kB' 'KernelStack: 7448 kB' 'PageTables: 3516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91056 kB' 'Slab: 293956 kB' 'SReclaimable: 91056 kB' 'SUnreclaim: 202900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.103 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21240656 kB' 'MemUsed: 6424116 kB' 'SwapCached: 0 kB' 'Active: 3302240 kB' 'Inactive: 137452 kB' 'Active(anon): 3182756 kB' 'Inactive(anon): 0 kB' 'Active(file): 119484 kB' 'Inactive(file): 137452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3075204 kB' 'Mapped: 98288 kB' 'AnonPages: 364524 kB' 'Shmem: 2818268 kB' 'KernelStack: 5288 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102144 kB' 'Slab: 258528 kB' 'SReclaimable: 102144 kB' 'SUnreclaim: 156384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
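At this point custom_alloc has confirmed that the global HugePages_Total (1536) equals nr_hugepages plus surplus and reserved pages, recorded the per-node split reported by get_nodes (512 pages on node0, 1024 on node1), and is now adding each node's surplus via get_meminfo. The per-node lookup traced above reduces to roughly the following condensed sketch of setup/common.sh's get_meminfo (illustrative, not the verbatim helper):

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _

    # Prefer the per-node sysfs copy when a node id is supplied and present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while read -r line; do
        line=${line#"Node $node "}          # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                     # kB for most fields, a bare count for HugePages_*
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Mirrors the trace: total and surplus hugepages on node 0 (512 total, 0 surplus here).
get_meminfo HugePages_Total 0
get_meminfo HugePages_Surp 0

The same routine serves both cases seen in this log: with a node id it reads /sys/devices/system/node/nodeN/meminfo and strips the "Node N " prefix, otherwise it falls back to the system-wide /proc/meminfo.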
00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.104 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:44.105 node0=512 expecting 512 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:44.105 node1=1024 expecting 1024 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:44.105 00:04:44.105 real 0m1.331s 00:04:44.105 user 0m0.573s 00:04:44.105 sys 0m0.719s 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.105 18:37:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.105 ************************************ 00:04:44.105 END TEST custom_alloc 00:04:44.105 ************************************ 00:04:44.105 18:37:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:44.105 18:37:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.105 18:37:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.105 18:37:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.105 ************************************ 00:04:44.105 START TEST no_shrink_alloc 00:04:44.105 ************************************ 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
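Both nodes report HugePages_Surp of 0, so the test totals stay at 512 and 1024; the observed split matches the expectation ("node0=512 expecting 512", "node1=1024 expecting 1024", and the final 512,1024 comparison), and custom_alloc passes in about 1.3 s. The next test, no_shrink_alloc, requests 2097152 kB worth of 2048 kB hugepages pinned to node 0; the trace suggests the arithmetic is simply the requested size divided by the hugepage size, as in this illustrative sketch (variable names are assumptions, not the script's own):

# 2097152 kB requested / 2048 kB per hugepage = 1024 pages, all assigned to node 0
# because only node id 0 was passed in.
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this rig
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1024

declare -a nodes_test
for node in 0; do
    nodes_test[node]=$nr_hugepages
done
echo "requesting ${nodes_test[0]} hugepages on node0"

The /proc/meminfo dump a little further down ('HugePages_Total: 1024', 'Hugetlb: 2097152 kB') is consistent with the same numbers.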
00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.105 18:37:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.098 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.098 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:45.098 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.098 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.098 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.098 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.098 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.098 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.098 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.098 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.362 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.362 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.362 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.362 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.362 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.362 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.362 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.362 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43942224 kB' 'MemAvailable: 47424480 kB' 'Buffers: 2704 kB' 'Cached: 12180536 kB' 'SwapCached: 0 kB' 'Active: 9177648 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789880 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489364 kB' 'Mapped: 192192 kB' 'Shmem: 8303744 kB' 'KReclaimable: 193200 kB' 'Slab: 552584 kB' 'SReclaimable: 193200 kB' 'SUnreclaim: 359384 kB' 'KernelStack: 12784 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.362 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
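Before verifying, no_shrink_alloc re-runs scripts/setup.sh ("setup output"); every listed PCI function is already bound to vfio-pci, so device binding is a no-op. verify_nr_hugepages then checks that transparent hugepages are not set to "never" (the [[ ... != *[never]* ]] test traced above) and, since they are not, samples the system-wide AnonHugePages counter; with no node argument, get_meminfo stays on /proc/meminfo. A minimal sketch of that guard, assuming the usual sysfs location for the THP setting (the exact source of the string is not shown in the trace):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *'[never]'* ]]; then
    # THP not globally disabled, so anonymous hugepages may exist; sample the
    # system-wide counter from /proc/meminfo.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"
fi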
00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.363 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43942516 kB' 'MemAvailable: 47424772 kB' 'Buffers: 2704 kB' 'Cached: 12180536 kB' 'SwapCached: 0 kB' 'Active: 9177324 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789556 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489044 kB' 'Mapped: 192156 kB' 'Shmem: 8303744 kB' 'KReclaimable: 193200 kB' 'Slab: 552568 kB' 'SReclaimable: 193200 kB' 'SUnreclaim: 359368 kB' 'KernelStack: 12768 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 
'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 
18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.364 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 
18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43942864 kB' 'MemAvailable: 47425120 kB' 'Buffers: 2704 kB' 'Cached: 12180572 kB' 'SwapCached: 0 kB' 'Active: 9177000 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789232 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488644 kB' 'Mapped: 192076 kB' 'Shmem: 8303780 kB' 'KReclaimable: 193200 kB' 'Slab: 552552 kB' 'SReclaimable: 193200 kB' 'SUnreclaim: 359352 kB' 'KernelStack: 12736 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.366 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.367 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:04:45.368 nr_hugepages=1024 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.368 resv_hugepages=0 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.368 surplus_hugepages=0 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.368 anon_hugepages=0 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43942864 kB' 'MemAvailable: 47425120 kB' 'Buffers: 2704 kB' 'Cached: 12180576 kB' 'SwapCached: 0 kB' 'Active: 9177068 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789300 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488676 kB' 'Mapped: 192076 kB' 'Shmem: 8303784 kB' 'KReclaimable: 193200 kB' 'Slab: 552552 kB' 'SReclaimable: 193200 kB' 'SUnreclaim: 359352 kB' 'KernelStack: 12752 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.368 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.369 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.629 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.630 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:45.631 
18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20620584 kB' 'MemUsed: 12256356 kB' 'SwapCached: 0 kB' 'Active: 5874796 kB' 'Inactive: 3354276 kB' 'Active(anon): 5606512 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354276 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9108092 kB' 'Mapped: 93788 kB' 'AnonPages: 124148 kB' 'Shmem: 5485532 kB' 'KernelStack: 7448 kB' 'PageTables: 3512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91056 kB' 'Slab: 294032 kB' 'SReclaimable: 91056 kB' 'SUnreclaim: 202976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.631 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.632 18:37:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.632 node0=1024 expecting 1024 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.632 18:37:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.568 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.568 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:46.568 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.568 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.568 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.568 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.568 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.568 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.568 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.568 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.568 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.568 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.568 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.568 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.568 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.569 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.569 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.569 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.569 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43968616 kB' 'MemAvailable: 47450864 kB' 'Buffers: 2704 kB' 'Cached: 12180640 kB' 'SwapCached: 0 kB' 'Active: 9177252 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789484 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488844 kB' 'Mapped: 192092 kB' 'Shmem: 8303848 kB' 'KReclaimable: 193184 kB' 'Slab: 552224 kB' 'SReclaimable: 193184 kB' 'SUnreclaim: 359040 kB' 'KernelStack: 12736 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.569 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.833 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43968492 kB' 'MemAvailable: 47450740 kB' 'Buffers: 2704 kB' 'Cached: 12180640 kB' 'SwapCached: 0 kB' 'Active: 9177016 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789248 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488544 kB' 'Mapped: 192080 kB' 'Shmem: 8303848 kB' 'KReclaimable: 193184 kB' 'Slab: 552192 kB' 'SReclaimable: 193184 kB' 'SUnreclaim: 359008 kB' 'KernelStack: 12752 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 
18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.834 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.835 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 
18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43967988 kB' 'MemAvailable: 47450236 kB' 'Buffers: 2704 kB' 'Cached: 12180664 kB' 'SwapCached: 0 kB' 'Active: 9177232 kB' 
'Inactive: 3491728 kB' 'Active(anon): 8789464 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488752 kB' 'Mapped: 192080 kB' 'Shmem: 8303872 kB' 'KReclaimable: 193184 kB' 'Slab: 552236 kB' 'SReclaimable: 193184 kB' 'SUnreclaim: 359052 kB' 'KernelStack: 12784 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.836 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.837 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.838 nr_hugepages=1024 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.838 resv_hugepages=0 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.838 surplus_hugepages=0 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.838 anon_hugepages=0 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43967976 kB' 'MemAvailable: 47450224 kB' 'Buffers: 2704 kB' 
'Cached: 12180684 kB' 'SwapCached: 0 kB' 'Active: 9177232 kB' 'Inactive: 3491728 kB' 'Active(anon): 8789464 kB' 'Inactive(anon): 0 kB' 'Active(file): 387768 kB' 'Inactive(file): 3491728 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488716 kB' 'Mapped: 192080 kB' 'Shmem: 8303892 kB' 'KReclaimable: 193184 kB' 'Slab: 552236 kB' 'SReclaimable: 193184 kB' 'SUnreclaim: 359052 kB' 'KernelStack: 12768 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9869712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1703516 kB' 'DirectMap2M: 13944832 kB' 'DirectMap1G: 53477376 kB' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.838 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
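The entries above are bash xtrace from the get_meminfo helper in setup/common.sh: it reads /proc/meminfo with IFS=': ', skips (continue) every field whose name does not match the one requested, and, once it reaches HugePages_Total a little further down in the trace, echoes the value and returns. A minimal, simplified sketch of that lookup pattern — not the helper's actual code, and lookup_meminfo is an illustrative name:

    # Print the value of one /proc/meminfo field, e.g. HugePages_Total.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1                               # field not present
    }

    lookup_meminfo HugePages_Total             # prints 1024 on this test node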
00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.839 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20633588 kB' 'MemUsed: 12243352 kB' 'SwapCached: 0 kB' 'Active: 5874384 kB' 'Inactive: 3354276 kB' 'Active(anon): 5606100 kB' 'Inactive(anon): 0 kB' 'Active(file): 268284 kB' 'Inactive(file): 3354276 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9108196 
kB' 'Mapped: 94228 kB' 'AnonPages: 123628 kB' 'Shmem: 5485636 kB' 'KernelStack: 7448 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91056 kB' 'Slab: 293840 kB' 'SReclaimable: 91056 kB' 'SUnreclaim: 202784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.840 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
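When a node argument is passed (node=0 here), the same helper switches its input to /sys/devices/system/node/node0/meminfo, loads it with mapfile, and strips the leading "Node 0 " prefix from every line — the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace — before running the same field scan, this time for HugePages_Surp. A sketch of that per-node variant; the function name is illustrative, and extglob is assumed to be enabled as it is in the traced script:

    shopt -s extglob
    # Print one field of a NUMA node's meminfo, e.g. HugePages_Surp for node 0.
    lookup_node_meminfo() {
        local get=$1 node=$2 var val _ line
        local -a mem
        mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    lookup_node_meminfo HugePages_Surp 0       # prints 0 in this run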
00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
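What these checks add up to: hugepages.sh first verifies that the kernel's global HugePages_Total matches the requested count plus any surplus and reserved pages, then distributes the expectation across NUMA nodes (node0 carries all 1024 pages in this run, node1 none) and adds each node's HugePages_Surp — the value being read in the trace above — to that expectation. With the numbers echoed in this run the arithmetic is trivial; it is spelled out here only to make the check explicit, assuming zero surplus and reserved pages as the echo 0 further down confirms for node 0:

    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo "global hugepage count OK"
    # per-node expectation: node0=1024, node1=0, hence the later "node0=1024 expecting 1024"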
00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.841 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.842 node0=1024 expecting 1024 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.842 00:04:46.842 real 0m2.652s 00:04:46.842 user 0m1.091s 00:04:46.842 sys 0m1.476s 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.842 18:37:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.842 ************************************ 00:04:46.842 END TEST no_shrink_alloc 00:04:46.842 ************************************ 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.842 18:37:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.842 00:04:46.842 real 0m11.031s 00:04:46.842 user 0m4.281s 00:04:46.842 sys 0m5.678s 00:04:46.842 18:37:58 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.842 18:37:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.842 ************************************ 00:04:46.842 END TEST hugepages 00:04:46.842 ************************************ 00:04:46.842 18:37:58 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:46.842 18:37:58 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.842 18:37:58 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.842 18:37:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.842 ************************************ 00:04:46.842 START TEST driver 00:04:46.842 ************************************ 00:04:46.842 18:37:58 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:46.842 * Looking for test storage... 
00:04:46.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:46.842 18:37:58 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:46.842 18:37:58 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.842 18:37:58 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.375 18:38:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:49.375 18:38:01 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.375 18:38:01 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.375 18:38:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:49.375 ************************************ 00:04:49.375 START TEST guess_driver 00:04:49.375 ************************************ 00:04:49.375 18:38:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:49.375 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:49.375 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:49.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:49.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:49.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:49.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:49.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:49.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:49.376 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:49.376 18:38:01 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:49.376 Looking for driver=vfio-pci 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.376 18:38:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.751 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.752 18:38:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.702 18:38:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.702 18:38:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.702 18:38:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.702 18:38:03 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:51.702 18:38:03 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:51.702 18:38:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.702 18:38:03 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.235 00:04:54.235 real 0m4.844s 00:04:54.235 user 0m1.084s 00:04:54.235 sys 0m1.831s 00:04:54.235 18:38:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.235 18:38:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:54.235 ************************************ 00:04:54.235 END TEST guess_driver 00:04:54.235 ************************************ 00:04:54.235 00:04:54.235 real 0m7.322s 00:04:54.235 user 0m1.594s 00:04:54.235 sys 0m2.791s 00:04:54.235 18:38:05 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.235 
18:38:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:54.235 ************************************ 00:04:54.235 END TEST driver 00:04:54.235 ************************************ 00:04:54.235 18:38:05 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.235 18:38:05 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.235 18:38:05 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.235 18:38:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:54.235 ************************************ 00:04:54.235 START TEST devices 00:04:54.235 ************************************ 00:04:54.235 18:38:06 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.235 * Looking for test storage... 00:04:54.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:54.235 18:38:06 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:54.235 18:38:06 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:54.235 18:38:06 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.235 18:38:06 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:56.138 18:38:07 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:56.138 No valid GPT data, 
bailing 00:04:56.138 18:38:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:56.138 18:38:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:56.138 18:38:07 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:56.138 18:38:07 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.138 18:38:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:56.138 ************************************ 00:04:56.138 START TEST nvme_mount 00:04:56.138 ************************************ 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:56.138 18:38:07 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.138 18:38:07 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:57.074 Creating new GPT entries in memory. 00:04:57.074 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.074 other utilities. 00:04:57.074 18:38:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.074 18:38:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.074 18:38:08 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.074 18:38:08 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.075 18:38:08 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:58.009 Creating new GPT entries in memory. 00:04:58.009 The operation has completed successfully. 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3391358 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
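The nvme_mount test prepares /dev/nvme0n1 from a clean slate: sgdisk wipes any existing partition tables, a single 1 GiB partition is created (sectors 2048 through 2099199, i.e. the 1073741824-byte size divided by 512-byte sectors), the script waits for the partition uevent, and the new partition is formatted ext4 and mounted under the test directory where the test_nvme dummy file lives. A condensed sketch of that sequence; the uevent wait is omitted and the test-file creation command is illustrative:

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                           # destroy old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"                               # dummy file used by the verify step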
00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.009 18:38:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.383 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.384 18:38:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.384 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.384 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.643 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.643 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.643 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.643 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:59.643 18:38:11 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.643 18:38:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.578 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.837 18:38:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.211 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.211 00:05:02.211 real 0m6.324s 00:05:02.211 user 0m1.483s 00:05:02.211 sys 0m2.402s 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.211 18:38:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:02.211 ************************************ 00:05:02.211 END TEST nvme_mount 00:05:02.211 ************************************ 
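For reference, the nvme_mount steps traced above reduce to a handful of standard block-device commands. A minimal sketch, assuming a disposable scratch disk at /dev/nvme0n1 and a hypothetical mount point /tmp/nvme_mount rather than the repository's test/setup/nvme_mount path:

  sgdisk /dev/nvme0n1 --zap-all                # wipe any existing GPT/MBR metadata
  sgdisk /dev/nvme0n1 --new=1:2048:2099199     # one ~1 GiB partition starting at sector 2048
  mkfs.ext4 -qF /dev/nvme0n1p1                 # quiet, forced ext4 format of the new partition
  mkdir -p /tmp/nvme_mount
  mount /dev/nvme0n1p1 /tmp/nvme_mount
  touch /tmp/nvme_mount/test_nvme              # marker file the verify step looks for
  umount /tmp/nvme_mount                        # cleanup, mirroring cleanup_nvme
  wipefs --all /dev/nvme0n1p1
  wipefs --all /dev/nvme0n1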
00:05:02.211 18:38:13 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:02.212 18:38:13 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.212 18:38:13 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.212 18:38:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:02.212 ************************************ 00:05:02.212 START TEST dm_mount 00:05:02.212 ************************************ 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:02.212 18:38:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:03.147 Creating new GPT entries in memory. 00:05:03.147 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:03.147 other utilities. 00:05:03.147 18:38:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:03.147 18:38:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.147 18:38:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.147 18:38:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.147 18:38:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:04.524 Creating new GPT entries in memory. 00:05:04.524 The operation has completed successfully. 
00:05:04.524 18:38:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.524 18:38:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.524 18:38:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.524 18:38:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.524 18:38:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:05.489 The operation has completed successfully. 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3393747 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.489 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.490 18:38:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.425 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.683 18:38:18 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.683 18:38:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:08.059 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:08.059 00:05:08.059 real 0m5.837s 00:05:08.059 user 0m0.983s 00:05:08.059 sys 0m1.691s 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.059 18:38:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:08.059 ************************************ 00:05:08.059 END TEST dm_mount 00:05:08.059 ************************************ 00:05:08.059 18:38:19 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:08.059 18:38:19 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.059 18:38:19 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.059 18:38:19 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.059 18:38:19 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.059 18:38:19 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.059 18:38:19 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.317 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.317 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.317 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.317 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.317 18:38:20 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:08.317 18:38:20 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.317 18:38:20 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.317 18:38:20 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.317 18:38:20 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.317 18:38:20 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.317 18:38:20 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.317 00:05:08.317 real 0m14.116s 00:05:08.317 user 0m3.141s 00:05:08.317 sys 0m5.136s 00:05:08.317 18:38:20 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.317 18:38:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.317 ************************************ 00:05:08.317 END TEST devices 00:05:08.317 ************************************ 00:05:08.317 00:05:08.317 real 0m43.060s 00:05:08.317 user 0m12.310s 00:05:08.317 sys 0m18.849s 00:05:08.317 18:38:20 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.317 18:38:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.317 ************************************ 00:05:08.317 END TEST setup.sh 00:05:08.317 ************************************ 00:05:08.317 18:38:20 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:09.688 Hugepages 00:05:09.688 node hugesize free / total 00:05:09.689 node0 1048576kB 0 / 0 00:05:09.689 node0 2048kB 2048 / 2048 00:05:09.689 node1 1048576kB 0 / 0 00:05:09.689 node1 2048kB 0 / 0 00:05:09.689 00:05:09.689 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.689 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:09.689 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:09.689 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:09.689 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:09.689 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:09.689 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:09.689 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:09.689 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:09.689 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:09.689 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:09.689 18:38:21 -- spdk/autotest.sh@130 -- # uname -s 00:05:09.689 18:38:21 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:09.689 18:38:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:09.689 18:38:21 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.622 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.880 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.880 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.880 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.880 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.880 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.880 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.880 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.880 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:11.816 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.816 18:38:23 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:13.192 18:38:24 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:13.192 18:38:24 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:13.192 18:38:24 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:13.192 18:38:24 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:13.192 18:38:24 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:13.192 18:38:24 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:13.192 18:38:24 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.192 18:38:24 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.192 18:38:24 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:13.192 18:38:24 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:13.192 18:38:24 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:13.192 18:38:24 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.128 Waiting for block devices as requested 00:05:14.128 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:14.387 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:14.387 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.387 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:14.646 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:14.646 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:14.646 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:14.646 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:14.904 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:14.904 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:14.904 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.904 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:15.161 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:15.161 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:15.161 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:15.418 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:15.418 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:15.418 18:38:27 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
00:05:15.418 18:38:27 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:15.418 18:38:27 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:15.418 18:38:27 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:15.418 18:38:27 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:15.418 18:38:27 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:15.418 18:38:27 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:15.418 18:38:27 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:15.418 18:38:27 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:15.418 18:38:27 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:15.418 18:38:27 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:15.418 18:38:27 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:15.418 18:38:27 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:15.418 18:38:27 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:15.418 18:38:27 -- common/autotest_common.sh@1553 -- # continue 00:05:15.418 18:38:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:15.418 18:38:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.418 18:38:27 -- common/autotest_common.sh@10 -- # set +x 00:05:15.418 18:38:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:15.418 18:38:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:15.418 18:38:27 -- common/autotest_common.sh@10 -- # set +x 00:05:15.418 18:38:27 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.791 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.791 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.791 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.791 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.791 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.791 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.791 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.791 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.791 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:17.725 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.725 18:38:29 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:17.725 18:38:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.725 18:38:29 -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.983 18:38:29 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:17.983 18:38:29 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:17.983 18:38:29 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.983 18:38:29 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:17.983 18:38:29 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:17.983 18:38:29 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:17.983 18:38:29 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:17.983 18:38:29 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:17.983 18:38:29 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.983 18:38:29 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:17.983 18:38:29 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:17.983 18:38:29 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:17.983 18:38:29 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:17.983 18:38:29 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:17.983 18:38:29 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:17.983 18:38:29 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:17.983 18:38:29 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:17.983 18:38:29 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:17.983 18:38:29 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:17.983 18:38:29 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:17.983 18:38:29 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3398927 00:05:17.983 18:38:29 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.983 18:38:29 -- common/autotest_common.sh@1594 -- # waitforlisten 3398927 00:05:17.983 18:38:29 -- common/autotest_common.sh@827 -- # '[' -z 3398927 ']' 00:05:17.983 18:38:29 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.983 18:38:29 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.983 18:38:29 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.983 18:38:29 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.983 18:38:29 -- common/autotest_common.sh@10 -- # set +x 00:05:17.983 [2024-07-25 18:38:29.737558] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:17.983 [2024-07-25 18:38:29.737659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398927 ] 00:05:17.983 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.983 [2024-07-25 18:38:29.799560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.241 [2024-07-25 18:38:29.889291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.499 18:38:30 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.499 18:38:30 -- common/autotest_common.sh@860 -- # return 0 00:05:18.499 18:38:30 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:18.499 18:38:30 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:18.499 18:38:30 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:21.779 nvme0n1 00:05:21.779 18:38:33 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:21.779 [2024-07-25 18:38:33.449861] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:21.779 [2024-07-25 18:38:33.449907] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:21.779 request: 00:05:21.779 { 00:05:21.779 "nvme_ctrlr_name": "nvme0", 00:05:21.779 "password": "test", 00:05:21.779 "method": "bdev_nvme_opal_revert", 00:05:21.779 "req_id": 1 00:05:21.779 } 00:05:21.779 Got JSON-RPC error response 00:05:21.779 response: 00:05:21.779 { 00:05:21.779 "code": -32603, 00:05:21.779 "message": "Internal error" 00:05:21.779 } 00:05:21.779 18:38:33 -- common/autotest_common.sh@1600 -- # true 00:05:21.779 18:38:33 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:21.779 18:38:33 -- common/autotest_common.sh@1604 -- # killprocess 3398927 00:05:21.779 18:38:33 -- common/autotest_common.sh@946 -- # '[' -z 3398927 ']' 00:05:21.779 18:38:33 -- common/autotest_common.sh@950 -- # kill -0 3398927 00:05:21.779 18:38:33 -- common/autotest_common.sh@951 -- # uname 00:05:21.779 18:38:33 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:21.779 18:38:33 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3398927 00:05:21.779 18:38:33 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:21.779 18:38:33 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:21.779 18:38:33 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3398927' 00:05:21.779 killing process with pid 3398927 00:05:21.779 18:38:33 -- common/autotest_common.sh@965 -- # kill 3398927 00:05:21.779 18:38:33 -- common/autotest_common.sh@970 -- # wait 3398927 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:05:21.779 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:21.780 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:23.680 18:38:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:23.680 18:38:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:23.680 18:38:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:23.680 18:38:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:23.680 18:38:35 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:23.680 18:38:35 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.680 18:38:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.680 18:38:35 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:23.680 18:38:35 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.680 18:38:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.680 18:38:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.680 18:38:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.680 ************************************ 00:05:23.680 START TEST env 00:05:23.680 ************************************ 00:05:23.680 18:38:35 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.680 * Looking for test storage... 
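The env suite that runs next chains its sub-tests from test/env/env.sh; pieced together from the env/env.sh@10..@29 trace lines that follow, its shape is roughly as below (a sketch, not a verbatim copy of the script; the testdir preamble is assumed):

  testdir=$(readlink -f "$(dirname "$0")")
  run_test env_memory "$testdir/memory/memory_ut"
  run_test env_vtophys "$testdir/vtophys/vtophys"
  run_test env_pci "$testdir/pci/pci_ut"
  argv='-c 0x1 '
  if [ "$(uname)" = Linux ]; then
      argv+='--base-virtaddr=0x200000000000'     # matches the env.sh@22 trace below
  fi
  run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv
  if [ "$(uname)" = Linux ]; then
      run_test env_mem_callbacks "$testdir/mem_callbacks/mem_callbacks"
  fi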
00:05:23.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:23.680 18:38:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.680 18:38:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.680 18:38:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.680 18:38:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.680 ************************************ 00:05:23.680 START TEST env_memory 00:05:23.680 ************************************ 00:05:23.680 18:38:35 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.680 00:05:23.680 00:05:23.680 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.680 http://cunit.sourceforge.net/ 00:05:23.680 00:05:23.680 00:05:23.680 Suite: memory 00:05:23.680 Test: alloc and free memory map ...[2024-07-25 18:38:35.429827] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.680 passed 00:05:23.680 Test: mem map translation ...[2024-07-25 18:38:35.454668] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.680 [2024-07-25 18:38:35.454697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.680 [2024-07-25 18:38:35.454760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.680 [2024-07-25 18:38:35.454775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.680 passed 00:05:23.680 Test: mem map registration ...[2024-07-25 18:38:35.507001] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:23.680 [2024-07-25 18:38:35.507026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:23.680 passed 00:05:23.939 Test: mem map adjacent registrations ...passed 00:05:23.939 00:05:23.939 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.939 suites 1 1 n/a 0 0 00:05:23.939 tests 4 4 4 0 0 00:05:23.939 asserts 152 152 152 0 n/a 00:05:23.939 00:05:23.939 Elapsed time = 0.173 seconds 00:05:23.939 00:05:23.939 real 0m0.181s 00:05:23.939 user 0m0.175s 00:05:23.939 sys 0m0.006s 00:05:23.939 18:38:35 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.939 18:38:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 ************************************ 00:05:23.939 END TEST env_memory 00:05:23.939 ************************************ 00:05:23.939 18:38:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.939 18:38:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.939 18:38:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:23.939 18:38:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 ************************************ 00:05:23.939 START TEST env_vtophys 00:05:23.939 ************************************ 00:05:23.939 18:38:35 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.939 EAL: lib.eal log level changed from notice to debug 00:05:23.939 EAL: Detected lcore 0 as core 0 on socket 0 00:05:23.939 EAL: Detected lcore 1 as core 1 on socket 0 00:05:23.939 EAL: Detected lcore 2 as core 2 on socket 0 00:05:23.939 EAL: Detected lcore 3 as core 3 on socket 0 00:05:23.939 EAL: Detected lcore 4 as core 4 on socket 0 00:05:23.939 EAL: Detected lcore 5 as core 5 on socket 0 00:05:23.939 EAL: Detected lcore 6 as core 8 on socket 0 00:05:23.939 EAL: Detected lcore 7 as core 9 on socket 0 00:05:23.939 EAL: Detected lcore 8 as core 10 on socket 0 00:05:23.939 EAL: Detected lcore 9 as core 11 on socket 0 00:05:23.939 EAL: Detected lcore 10 as core 12 on socket 0 00:05:23.939 EAL: Detected lcore 11 as core 13 on socket 0 00:05:23.939 EAL: Detected lcore 12 as core 0 on socket 1 00:05:23.939 EAL: Detected lcore 13 as core 1 on socket 1 00:05:23.939 EAL: Detected lcore 14 as core 2 on socket 1 00:05:23.939 EAL: Detected lcore 15 as core 3 on socket 1 00:05:23.939 EAL: Detected lcore 16 as core 4 on socket 1 00:05:23.939 EAL: Detected lcore 17 as core 5 on socket 1 00:05:23.939 EAL: Detected lcore 18 as core 8 on socket 1 00:05:23.939 EAL: Detected lcore 19 as core 9 on socket 1 00:05:23.939 EAL: Detected lcore 20 as core 10 on socket 1 00:05:23.939 EAL: Detected lcore 21 as core 11 on socket 1 00:05:23.939 EAL: Detected lcore 22 as core 12 on socket 1 00:05:23.939 EAL: Detected lcore 23 as core 13 on socket 1 00:05:23.939 EAL: Detected lcore 24 as core 0 on socket 0 00:05:23.939 EAL: Detected lcore 25 as core 1 on socket 0 00:05:23.939 EAL: Detected lcore 26 as core 2 on socket 0 00:05:23.939 EAL: Detected lcore 27 as core 3 on socket 0 00:05:23.939 EAL: Detected lcore 28 as core 4 on socket 0 00:05:23.939 EAL: Detected lcore 29 as core 5 on socket 0 00:05:23.939 EAL: Detected lcore 30 as core 8 on socket 0 00:05:23.939 EAL: Detected lcore 31 as core 9 on socket 0 00:05:23.939 EAL: Detected lcore 32 as core 10 on socket 0 00:05:23.939 EAL: Detected lcore 33 as core 11 on socket 0 00:05:23.939 EAL: Detected lcore 34 as core 12 on socket 0 00:05:23.939 EAL: Detected lcore 35 as core 13 on socket 0 00:05:23.939 EAL: Detected lcore 36 as core 0 on socket 1 00:05:23.939 EAL: Detected lcore 37 as core 1 on socket 1 00:05:23.939 EAL: Detected lcore 38 as core 2 on socket 1 00:05:23.939 EAL: Detected lcore 39 as core 3 on socket 1 00:05:23.939 EAL: Detected lcore 40 as core 4 on socket 1 00:05:23.939 EAL: Detected lcore 41 as core 5 on socket 1 00:05:23.939 EAL: Detected lcore 42 as core 8 on socket 1 00:05:23.939 EAL: Detected lcore 43 as core 9 on socket 1 00:05:23.939 EAL: Detected lcore 44 as core 10 on socket 1 00:05:23.939 EAL: Detected lcore 45 as core 11 on socket 1 00:05:23.939 EAL: Detected lcore 46 as core 12 on socket 1 00:05:23.939 EAL: Detected lcore 47 as core 13 on socket 1 00:05:23.939 EAL: Maximum logical cores by configuration: 128 00:05:23.939 EAL: Detected CPU lcores: 48 00:05:23.939 EAL: Detected NUMA nodes: 2 00:05:23.939 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:23.939 EAL: Detected shared linkage of DPDK 00:05:23.939 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:23.939 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:23.939 EAL: Registered [vdev] bus. 00:05:23.939 EAL: bus.vdev log level changed from disabled to notice 00:05:23.939 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:23.939 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:23.939 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:23.939 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:23.940 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:23.940 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:23.940 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:23.940 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:23.940 EAL: No shared files mode enabled, IPC will be disabled 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: Bus pci wants IOVA as 'DC' 00:05:23.940 EAL: Bus vdev wants IOVA as 'DC' 00:05:23.940 EAL: Buses did not request a specific IOVA mode. 00:05:23.940 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:23.940 EAL: Selected IOVA mode 'VA' 00:05:23.940 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.940 EAL: Probing VFIO support... 00:05:23.940 EAL: IOMMU type 1 (Type 1) is supported 00:05:23.940 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:23.940 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:23.940 EAL: VFIO support initialized 00:05:23.940 EAL: Ask a virtual area of 0x2e000 bytes 00:05:23.940 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:23.940 EAL: Setting up physically contiguous memory... 
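When EAL reports "IOMMU is available, selecting IOVA as VA mode" as above, the host prerequisites can be confirmed independently of the test run; a host-side sanity check, not part of the captured output:

  ls /sys/kernel/iommu_groups | wc -l    # non-zero when the kernel IOMMU is active
  lsmod | grep vfio_pci                  # vfio-pci loaded, so "VFIO support initialized" can succeed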
00:05:23.940 EAL: Setting maximum number of open files to 524288 00:05:23.940 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:23.940 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:23.940 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:23.940 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:23.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.940 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:23.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.940 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:23.940 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:23.940 EAL: Hugepages will be freed exactly as allocated. 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: TSC frequency is ~2700000 KHz 00:05:23.940 EAL: Main lcore 0 is ready (tid=7f5064339a00;cpuset=[0]) 00:05:23.940 EAL: Trying to obtain current memory policy. 00:05:23.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.940 EAL: Restoring previous memory policy: 0 00:05:23.940 EAL: request: mp_malloc_sync 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: Heap on socket 0 was expanded by 2MB 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:23.940 EAL: Mem event callback 'spdk:(nil)' registered 00:05:23.940 00:05:23.940 00:05:23.940 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.940 http://cunit.sourceforge.net/ 00:05:23.940 00:05:23.940 00:05:23.940 Suite: components_suite 00:05:23.940 Test: vtophys_malloc_test ...passed 00:05:23.940 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:23.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.940 EAL: Restoring previous memory policy: 4 00:05:23.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.940 EAL: request: mp_malloc_sync 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.940 EAL: request: mp_malloc_sync 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.940 EAL: Trying to obtain current memory policy. 00:05:23.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.940 EAL: Restoring previous memory policy: 4 00:05:23.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.940 EAL: request: mp_malloc_sync 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.940 EAL: request: mp_malloc_sync 00:05:23.940 EAL: No shared files mode enabled, IPC is disabled 00:05:23.940 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.940 EAL: Trying to obtain current memory policy. 00:05:23.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.941 EAL: Restoring previous memory policy: 4 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.941 EAL: Trying to obtain current memory policy. 
00:05:23.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.941 EAL: Restoring previous memory policy: 4 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.941 EAL: Trying to obtain current memory policy. 00:05:23.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.941 EAL: Restoring previous memory policy: 4 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.941 EAL: Trying to obtain current memory policy. 00:05:23.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.941 EAL: Restoring previous memory policy: 4 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.941 EAL: Trying to obtain current memory policy. 00:05:23.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.941 EAL: Restoring previous memory policy: 4 00:05:23.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.941 EAL: request: mp_malloc_sync 00:05:23.941 EAL: No shared files mode enabled, IPC is disabled 00:05:23.941 EAL: Heap on socket 0 was expanded by 130MB 00:05:24.206 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.206 EAL: request: mp_malloc_sync 00:05:24.206 EAL: No shared files mode enabled, IPC is disabled 00:05:24.206 EAL: Heap on socket 0 was shrunk by 130MB 00:05:24.206 EAL: Trying to obtain current memory policy. 00:05:24.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.206 EAL: Restoring previous memory policy: 4 00:05:24.206 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.206 EAL: request: mp_malloc_sync 00:05:24.206 EAL: No shared files mode enabled, IPC is disabled 00:05:24.206 EAL: Heap on socket 0 was expanded by 258MB 00:05:24.206 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.206 EAL: request: mp_malloc_sync 00:05:24.206 EAL: No shared files mode enabled, IPC is disabled 00:05:24.206 EAL: Heap on socket 0 was shrunk by 258MB 00:05:24.206 EAL: Trying to obtain current memory policy. 
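Each "expanded by N MB" step in this vtophys_spdk_malloc_test sequence is backed by N/2 of the 2048 kB hugepages set up earlier; a quick way to sanity-check the accounting for the allocation sizes used by the test (illustrative only):

  for mb in 2 4 6 10 18 34 66 130 258 514 1026; do
      echo "$mb MB -> $((mb / 2)) x 2048 kB hugepages"
  done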
00:05:24.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.478 EAL: Restoring previous memory policy: 4 00:05:24.478 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.478 EAL: request: mp_malloc_sync 00:05:24.478 EAL: No shared files mode enabled, IPC is disabled 00:05:24.478 EAL: Heap on socket 0 was expanded by 514MB 00:05:24.478 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.741 EAL: request: mp_malloc_sync 00:05:24.741 EAL: No shared files mode enabled, IPC is disabled 00:05:24.741 EAL: Heap on socket 0 was shrunk by 514MB 00:05:24.741 EAL: Trying to obtain current memory policy. 00:05:24.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.998 EAL: Restoring previous memory policy: 4 00:05:24.998 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.998 EAL: request: mp_malloc_sync 00:05:24.998 EAL: No shared files mode enabled, IPC is disabled 00:05:24.998 EAL: Heap on socket 0 was expanded by 1026MB 00:05:25.255 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.255 EAL: request: mp_malloc_sync 00:05:25.255 EAL: No shared files mode enabled, IPC is disabled 00:05:25.255 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:25.255 passed 00:05:25.255 00:05:25.255 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.255 suites 1 1 n/a 0 0 00:05:25.255 tests 2 2 2 0 0 00:05:25.255 asserts 497 497 497 0 n/a 00:05:25.255 00:05:25.255 Elapsed time = 1.374 seconds 00:05:25.255 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.255 EAL: request: mp_malloc_sync 00:05:25.255 EAL: No shared files mode enabled, IPC is disabled 00:05:25.255 EAL: Heap on socket 0 was shrunk by 2MB 00:05:25.255 EAL: No shared files mode enabled, IPC is disabled 00:05:25.256 EAL: No shared files mode enabled, IPC is disabled 00:05:25.256 EAL: No shared files mode enabled, IPC is disabled 00:05:25.256 00:05:25.256 real 0m1.491s 00:05:25.256 user 0m0.865s 00:05:25.256 sys 0m0.591s 00:05:25.256 18:38:37 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.256 18:38:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:25.256 ************************************ 00:05:25.256 END TEST env_vtophys 00:05:25.256 ************************************ 00:05:25.514 18:38:37 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.514 18:38:37 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.514 18:38:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.514 18:38:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.514 ************************************ 00:05:25.514 START TEST env_pci 00:05:25.514 ************************************ 00:05:25.514 18:38:37 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.514 00:05:25.514 00:05:25.514 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.514 http://cunit.sourceforge.net/ 00:05:25.514 00:05:25.514 00:05:25.514 Suite: pci 00:05:25.514 Test: pci_hook ...[2024-07-25 18:38:37.172548] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3399816 has claimed it 00:05:25.514 EAL: Cannot find device (10000:00:01.0) 00:05:25.514 EAL: Failed to attach device on primary process 00:05:25.514 passed 00:05:25.514 00:05:25.514 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:25.514 suites 1 1 n/a 0 0 00:05:25.514 tests 1 1 1 0 0 00:05:25.514 asserts 25 25 25 0 n/a 00:05:25.514 00:05:25.514 Elapsed time = 0.021 seconds 00:05:25.514 00:05:25.514 real 0m0.034s 00:05:25.514 user 0m0.011s 00:05:25.514 sys 0m0.023s 00:05:25.514 18:38:37 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.514 18:38:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:25.514 ************************************ 00:05:25.514 END TEST env_pci 00:05:25.514 ************************************ 00:05:25.514 18:38:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:25.514 18:38:37 env -- env/env.sh@15 -- # uname 00:05:25.514 18:38:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:25.514 18:38:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:25.514 18:38:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.514 18:38:37 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:25.514 18:38:37 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.514 18:38:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.514 ************************************ 00:05:25.514 START TEST env_dpdk_post_init 00:05:25.514 ************************************ 00:05:25.514 18:38:37 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.514 EAL: Detected CPU lcores: 48 00:05:25.514 EAL: Detected NUMA nodes: 2 00:05:25.514 EAL: Detected shared linkage of DPDK 00:05:25.514 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.514 EAL: Selected IOVA mode 'VA' 00:05:25.514 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.514 EAL: VFIO support initialized 00:05:25.514 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.514 EAL: Using IOMMU type 1 (Type 1) 00:05:25.514 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:25.514 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:25.514 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:25.773 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:26.707 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:29.986 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:29.986 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:29.986 Starting DPDK initialization... 00:05:29.986 Starting SPDK post initialization... 00:05:29.986 SPDK NVMe probe 00:05:29.986 Attaching to 0000:88:00.0 00:05:29.986 Attached to 0000:88:00.0 00:05:29.986 Cleaning up... 00:05:29.986 00:05:29.986 real 0m4.390s 00:05:29.986 user 0m3.249s 00:05:29.986 sys 0m0.204s 00:05:29.986 18:38:41 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.986 18:38:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.986 ************************************ 00:05:29.986 END TEST env_dpdk_post_init 00:05:29.986 ************************************ 00:05:29.986 18:38:41 env -- env/env.sh@26 -- # uname 00:05:29.986 18:38:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.986 18:38:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.986 18:38:41 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.986 18:38:41 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.986 18:38:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.986 ************************************ 00:05:29.986 START TEST env_mem_callbacks 00:05:29.986 ************************************ 00:05:29.986 18:38:41 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.986 EAL: Detected CPU lcores: 48 00:05:29.986 EAL: Detected NUMA nodes: 2 00:05:29.986 EAL: Detected shared linkage of DPDK 00:05:29.986 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.986 EAL: Selected IOVA mode 'VA' 00:05:29.986 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.986 EAL: VFIO support initialized 00:05:29.986 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.986 00:05:29.986 00:05:29.986 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.986 http://cunit.sourceforge.net/ 00:05:29.986 00:05:29.986 00:05:29.986 Suite: memory 00:05:29.986 Test: test ... 
00:05:29.986 register 0x200000200000 2097152 00:05:29.986 malloc 3145728 00:05:29.986 register 0x200000400000 4194304 00:05:29.986 buf 0x200000500000 len 3145728 PASSED 00:05:29.986 malloc 64 00:05:29.986 buf 0x2000004fff40 len 64 PASSED 00:05:29.986 malloc 4194304 00:05:29.986 register 0x200000800000 6291456 00:05:29.986 buf 0x200000a00000 len 4194304 PASSED 00:05:29.986 free 0x200000500000 3145728 00:05:29.986 free 0x2000004fff40 64 00:05:29.986 unregister 0x200000400000 4194304 PASSED 00:05:29.986 free 0x200000a00000 4194304 00:05:29.986 unregister 0x200000800000 6291456 PASSED 00:05:29.986 malloc 8388608 00:05:29.986 register 0x200000400000 10485760 00:05:29.986 buf 0x200000600000 len 8388608 PASSED 00:05:29.986 free 0x200000600000 8388608 00:05:29.986 unregister 0x200000400000 10485760 PASSED 00:05:29.986 passed 00:05:29.986 00:05:29.986 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.986 suites 1 1 n/a 0 0 00:05:29.986 tests 1 1 1 0 0 00:05:29.986 asserts 15 15 15 0 n/a 00:05:29.986 00:05:29.986 Elapsed time = 0.005 seconds 00:05:29.986 00:05:29.986 real 0m0.046s 00:05:29.986 user 0m0.013s 00:05:29.986 sys 0m0.033s 00:05:29.986 18:38:41 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.986 18:38:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.986 ************************************ 00:05:29.986 END TEST env_mem_callbacks 00:05:29.986 ************************************ 00:05:29.986 00:05:29.986 real 0m6.430s 00:05:29.986 user 0m4.436s 00:05:29.986 sys 0m1.035s 00:05:29.986 18:38:41 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.986 18:38:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.986 ************************************ 00:05:29.986 END TEST env 00:05:29.986 ************************************ 00:05:29.986 18:38:41 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.986 18:38:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.986 18:38:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.986 18:38:41 -- common/autotest_common.sh@10 -- # set +x 00:05:29.986 ************************************ 00:05:29.986 START TEST rpc 00:05:29.986 ************************************ 00:05:29.986 18:38:41 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.986 * Looking for test storage... 00:05:29.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.986 18:38:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3400473 00:05:29.986 18:38:41 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:29.986 18:38:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.986 18:38:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3400473 00:05:29.986 18:38:41 rpc -- common/autotest_common.sh@827 -- # '[' -z 3400473 ']' 00:05:29.986 18:38:41 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.986 18:38:41 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.986 18:38:41 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
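The rpc suite bring-up traced here (rpc/rpc.sh@64-67) reduces to roughly the following; the backgrounding/$! idiom is an assumption about how spdk_pid is captured, while the -e bdev flag and the trap line are taken from the trace:

  "$rootdir/build/bin/spdk_tgt" -e bdev &      # -e bdev enables the bdev tracepoint group (mask 0x8)
  spdk_pid=$!
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $spdk_pid                      # polls /var/tmp/spdk.sock until the target answers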
00:05:29.986 18:38:41 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.986 18:38:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.244 [2024-07-25 18:38:41.901737] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:30.244 [2024-07-25 18:38:41.901814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400473 ] 00:05:30.244 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.244 [2024-07-25 18:38:41.960514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.244 [2024-07-25 18:38:42.046945] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.244 [2024-07-25 18:38:42.047004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3400473' to capture a snapshot of events at runtime. 00:05:30.244 [2024-07-25 18:38:42.047033] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.244 [2024-07-25 18:38:42.047067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.244 [2024-07-25 18:38:42.047097] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3400473 for offline analysis/debug. 00:05:30.244 [2024-07-25 18:38:42.047131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.502 18:38:42 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.502 18:38:42 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:30.502 18:38:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.502 18:38:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.502 18:38:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.502 18:38:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.502 18:38:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.502 18:38:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.502 18:38:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.502 ************************************ 00:05:30.502 START TEST rpc_integrity 00:05:30.502 ************************************ 00:05:30.502 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:30.502 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.502 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.502 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.502 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.502 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.502 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.502 18:38:42 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.502 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.502 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.502 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.760 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.760 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.760 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.761 { 00:05:30.761 "name": "Malloc0", 00:05:30.761 "aliases": [ 00:05:30.761 "a3a50aa0-f92e-4435-a434-f022ea553361" 00:05:30.761 ], 00:05:30.761 "product_name": "Malloc disk", 00:05:30.761 "block_size": 512, 00:05:30.761 "num_blocks": 16384, 00:05:30.761 "uuid": "a3a50aa0-f92e-4435-a434-f022ea553361", 00:05:30.761 "assigned_rate_limits": { 00:05:30.761 "rw_ios_per_sec": 0, 00:05:30.761 "rw_mbytes_per_sec": 0, 00:05:30.761 "r_mbytes_per_sec": 0, 00:05:30.761 "w_mbytes_per_sec": 0 00:05:30.761 }, 00:05:30.761 "claimed": false, 00:05:30.761 "zoned": false, 00:05:30.761 "supported_io_types": { 00:05:30.761 "read": true, 00:05:30.761 "write": true, 00:05:30.761 "unmap": true, 00:05:30.761 "write_zeroes": true, 00:05:30.761 "flush": true, 00:05:30.761 "reset": true, 00:05:30.761 "compare": false, 00:05:30.761 "compare_and_write": false, 00:05:30.761 "abort": true, 00:05:30.761 "nvme_admin": false, 00:05:30.761 "nvme_io": false 00:05:30.761 }, 00:05:30.761 "memory_domains": [ 00:05:30.761 { 00:05:30.761 "dma_device_id": "system", 00:05:30.761 "dma_device_type": 1 00:05:30.761 }, 00:05:30.761 { 00:05:30.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.761 "dma_device_type": 2 00:05:30.761 } 00:05:30.761 ], 00:05:30.761 "driver_specific": {} 00:05:30.761 } 00:05:30.761 ]' 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 [2024-07-25 18:38:42.435380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.761 [2024-07-25 18:38:42.435438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.761 [2024-07-25 18:38:42.435464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xef48f0 00:05:30.761 [2024-07-25 18:38:42.435480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.761 [2024-07-25 18:38:42.436919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.761 [2024-07-25 18:38:42.436947] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.761 Passthru0 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.761 { 00:05:30.761 "name": "Malloc0", 00:05:30.761 "aliases": [ 00:05:30.761 "a3a50aa0-f92e-4435-a434-f022ea553361" 00:05:30.761 ], 00:05:30.761 "product_name": "Malloc disk", 00:05:30.761 "block_size": 512, 00:05:30.761 "num_blocks": 16384, 00:05:30.761 "uuid": "a3a50aa0-f92e-4435-a434-f022ea553361", 00:05:30.761 "assigned_rate_limits": { 00:05:30.761 "rw_ios_per_sec": 0, 00:05:30.761 "rw_mbytes_per_sec": 0, 00:05:30.761 "r_mbytes_per_sec": 0, 00:05:30.761 "w_mbytes_per_sec": 0 00:05:30.761 }, 00:05:30.761 "claimed": true, 00:05:30.761 "claim_type": "exclusive_write", 00:05:30.761 "zoned": false, 00:05:30.761 "supported_io_types": { 00:05:30.761 "read": true, 00:05:30.761 "write": true, 00:05:30.761 "unmap": true, 00:05:30.761 "write_zeroes": true, 00:05:30.761 "flush": true, 00:05:30.761 "reset": true, 00:05:30.761 "compare": false, 00:05:30.761 "compare_and_write": false, 00:05:30.761 "abort": true, 00:05:30.761 "nvme_admin": false, 00:05:30.761 "nvme_io": false 00:05:30.761 }, 00:05:30.761 "memory_domains": [ 00:05:30.761 { 00:05:30.761 "dma_device_id": "system", 00:05:30.761 "dma_device_type": 1 00:05:30.761 }, 00:05:30.761 { 00:05:30.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.761 "dma_device_type": 2 00:05:30.761 } 00:05:30.761 ], 00:05:30.761 "driver_specific": {} 00:05:30.761 }, 00:05:30.761 { 00:05:30.761 "name": "Passthru0", 00:05:30.761 "aliases": [ 00:05:30.761 "442de2b7-a119-5c75-9012-e0ffdbef4ef7" 00:05:30.761 ], 00:05:30.761 "product_name": "passthru", 00:05:30.761 "block_size": 512, 00:05:30.761 "num_blocks": 16384, 00:05:30.761 "uuid": "442de2b7-a119-5c75-9012-e0ffdbef4ef7", 00:05:30.761 "assigned_rate_limits": { 00:05:30.761 "rw_ios_per_sec": 0, 00:05:30.761 "rw_mbytes_per_sec": 0, 00:05:30.761 "r_mbytes_per_sec": 0, 00:05:30.761 "w_mbytes_per_sec": 0 00:05:30.761 }, 00:05:30.761 "claimed": false, 00:05:30.761 "zoned": false, 00:05:30.761 "supported_io_types": { 00:05:30.761 "read": true, 00:05:30.761 "write": true, 00:05:30.761 "unmap": true, 00:05:30.761 "write_zeroes": true, 00:05:30.761 "flush": true, 00:05:30.761 "reset": true, 00:05:30.761 "compare": false, 00:05:30.761 "compare_and_write": false, 00:05:30.761 "abort": true, 00:05:30.761 "nvme_admin": false, 00:05:30.761 "nvme_io": false 00:05:30.761 }, 00:05:30.761 "memory_domains": [ 00:05:30.761 { 00:05:30.761 "dma_device_id": "system", 00:05:30.761 "dma_device_type": 1 00:05:30.761 }, 00:05:30.761 { 00:05:30.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.761 "dma_device_type": 2 00:05:30.761 } 00:05:30.761 ], 00:05:30.761 "driver_specific": { 00:05:30.761 "passthru": { 00:05:30.761 "name": "Passthru0", 00:05:30.761 "base_bdev_name": "Malloc0" 00:05:30.761 } 00:05:30.761 } 00:05:30.761 } 00:05:30.761 ]' 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 
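Condensed, the rpc_integrity flow traced above and completed just below is: create a malloc bdev, layer a passthru bdev on top, check that bdev_get_bdevs reports both, then delete them and check the list is empty again. A sketch with the sizes from the trace (rpc_cmd effectively forwards to scripts/rpc.py):

  malloc=$(rpc_cmd bdev_malloc_create 8 512)          # 8 MB bdev, 512-byte blocks -> "Malloc0"
  rpc_cmd bdev_passthru_create -b "$malloc" -p Passthru0
  [ "$(rpc_cmd bdev_get_bdevs | jq length)" == 2 ]
  rpc_cmd bdev_passthru_delete Passthru0
  rpc_cmd bdev_malloc_delete "$malloc"
  [ "$(rpc_cmd bdev_get_bdevs | jq length)" == 0 ]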
18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.761 18:38:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.761 00:05:30.761 real 0m0.230s 00:05:30.761 user 0m0.147s 00:05:30.761 sys 0m0.021s 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 ************************************ 00:05:30.761 END TEST rpc_integrity 00:05:30.761 ************************************ 00:05:30.761 18:38:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.761 18:38:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.761 18:38:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.761 18:38:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 ************************************ 00:05:30.761 START TEST rpc_plugins 00:05:30.761 ************************************ 00:05:30.761 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:30.761 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.761 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.761 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.761 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.761 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.761 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.761 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.761 { 00:05:30.761 "name": "Malloc1", 00:05:30.762 "aliases": [ 00:05:30.762 "6c847ee2-8b2c-4999-8d30-19bb67b25941" 00:05:30.762 ], 00:05:30.762 "product_name": "Malloc disk", 00:05:30.762 "block_size": 4096, 00:05:30.762 "num_blocks": 256, 00:05:30.762 "uuid": "6c847ee2-8b2c-4999-8d30-19bb67b25941", 00:05:30.762 "assigned_rate_limits": { 00:05:30.762 "rw_ios_per_sec": 0, 00:05:30.762 "rw_mbytes_per_sec": 0, 00:05:30.762 "r_mbytes_per_sec": 0, 00:05:30.762 "w_mbytes_per_sec": 0 00:05:30.762 }, 00:05:30.762 "claimed": false, 00:05:30.762 "zoned": false, 00:05:30.762 "supported_io_types": { 00:05:30.762 "read": true, 00:05:30.762 "write": true, 00:05:30.762 "unmap": true, 00:05:30.762 "write_zeroes": true, 00:05:30.762 
"flush": true, 00:05:30.762 "reset": true, 00:05:30.762 "compare": false, 00:05:30.762 "compare_and_write": false, 00:05:30.762 "abort": true, 00:05:30.762 "nvme_admin": false, 00:05:30.762 "nvme_io": false 00:05:30.762 }, 00:05:30.762 "memory_domains": [ 00:05:30.762 { 00:05:30.762 "dma_device_id": "system", 00:05:30.762 "dma_device_type": 1 00:05:30.762 }, 00:05:30.762 { 00:05:30.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.762 "dma_device_type": 2 00:05:30.762 } 00:05:30.762 ], 00:05:30.762 "driver_specific": {} 00:05:30.762 } 00:05:30.762 ]' 00:05:30.762 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:31.019 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.019 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.019 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.019 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.019 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:31.019 18:38:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.019 00:05:31.019 real 0m0.110s 00:05:31.019 user 0m0.071s 00:05:31.019 sys 0m0.010s 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.019 18:38:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.019 ************************************ 00:05:31.020 END TEST rpc_plugins 00:05:31.020 ************************************ 00:05:31.020 18:38:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.020 18:38:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.020 18:38:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.020 18:38:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.020 ************************************ 00:05:31.020 START TEST rpc_trace_cmd_test 00:05:31.020 ************************************ 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:31.020 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3400473", 00:05:31.020 "tpoint_group_mask": "0x8", 00:05:31.020 "iscsi_conn": { 00:05:31.020 "mask": "0x2", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "scsi": { 00:05:31.020 "mask": "0x4", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "bdev": { 00:05:31.020 "mask": "0x8", 00:05:31.020 "tpoint_mask": 
"0xffffffffffffffff" 00:05:31.020 }, 00:05:31.020 "nvmf_rdma": { 00:05:31.020 "mask": "0x10", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "nvmf_tcp": { 00:05:31.020 "mask": "0x20", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "ftl": { 00:05:31.020 "mask": "0x40", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "blobfs": { 00:05:31.020 "mask": "0x80", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "dsa": { 00:05:31.020 "mask": "0x200", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "thread": { 00:05:31.020 "mask": "0x400", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "nvme_pcie": { 00:05:31.020 "mask": "0x800", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "iaa": { 00:05:31.020 "mask": "0x1000", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "nvme_tcp": { 00:05:31.020 "mask": "0x2000", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "bdev_nvme": { 00:05:31.020 "mask": "0x4000", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 }, 00:05:31.020 "sock": { 00:05:31.020 "mask": "0x8000", 00:05:31.020 "tpoint_mask": "0x0" 00:05:31.020 } 00:05:31.020 }' 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.020 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.278 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.278 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.278 18:38:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.278 00:05:31.278 real 0m0.193s 00:05:31.278 user 0m0.171s 00:05:31.278 sys 0m0.016s 00:05:31.278 18:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.278 18:38:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.278 ************************************ 00:05:31.278 END TEST rpc_trace_cmd_test 00:05:31.278 ************************************ 00:05:31.278 18:38:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:31.278 18:38:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.278 18:38:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.278 18:38:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.278 18:38:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.278 18:38:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.278 ************************************ 00:05:31.278 START TEST rpc_daemon_integrity 00:05:31.278 ************************************ 00:05:31.278 18:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:31.278 18:38:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.278 18:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.278 18:38:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.278 { 00:05:31.278 "name": "Malloc2", 00:05:31.278 "aliases": [ 00:05:31.278 "a7162971-ca2b-4112-a41c-2b4c14d964f5" 00:05:31.278 ], 00:05:31.278 "product_name": "Malloc disk", 00:05:31.278 "block_size": 512, 00:05:31.278 "num_blocks": 16384, 00:05:31.278 "uuid": "a7162971-ca2b-4112-a41c-2b4c14d964f5", 00:05:31.278 "assigned_rate_limits": { 00:05:31.278 "rw_ios_per_sec": 0, 00:05:31.278 "rw_mbytes_per_sec": 0, 00:05:31.278 "r_mbytes_per_sec": 0, 00:05:31.278 "w_mbytes_per_sec": 0 00:05:31.278 }, 00:05:31.278 "claimed": false, 00:05:31.278 "zoned": false, 00:05:31.278 "supported_io_types": { 00:05:31.278 "read": true, 00:05:31.278 "write": true, 00:05:31.278 "unmap": true, 00:05:31.278 "write_zeroes": true, 00:05:31.278 "flush": true, 00:05:31.278 "reset": true, 00:05:31.278 "compare": false, 00:05:31.278 "compare_and_write": false, 00:05:31.278 "abort": true, 00:05:31.278 "nvme_admin": false, 00:05:31.278 "nvme_io": false 00:05:31.278 }, 00:05:31.278 "memory_domains": [ 00:05:31.278 { 00:05:31.278 "dma_device_id": "system", 00:05:31.278 "dma_device_type": 1 00:05:31.278 }, 00:05:31.278 { 00:05:31.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.278 "dma_device_type": 2 00:05:31.278 } 00:05:31.278 ], 00:05:31.278 "driver_specific": {} 00:05:31.278 } 00:05:31.278 ]' 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.278 [2024-07-25 18:38:43.101279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.278 [2024-07-25 18:38:43.101320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.278 [2024-07-25 18:38:43.101356] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdef600 00:05:31.278 [2024-07-25 18:38:43.101369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.278 [2024-07-25 18:38:43.102835] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.278 [2024-07-25 18:38:43.102864] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.278 Passthru0 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.278 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.278 { 00:05:31.278 "name": "Malloc2", 00:05:31.278 "aliases": [ 00:05:31.278 "a7162971-ca2b-4112-a41c-2b4c14d964f5" 00:05:31.278 ], 00:05:31.278 "product_name": "Malloc disk", 00:05:31.278 "block_size": 512, 00:05:31.278 "num_blocks": 16384, 00:05:31.278 "uuid": "a7162971-ca2b-4112-a41c-2b4c14d964f5", 00:05:31.278 "assigned_rate_limits": { 00:05:31.278 "rw_ios_per_sec": 0, 00:05:31.278 "rw_mbytes_per_sec": 0, 00:05:31.278 "r_mbytes_per_sec": 0, 00:05:31.278 "w_mbytes_per_sec": 0 00:05:31.278 }, 00:05:31.278 "claimed": true, 00:05:31.278 "claim_type": "exclusive_write", 00:05:31.278 "zoned": false, 00:05:31.278 "supported_io_types": { 00:05:31.278 "read": true, 00:05:31.278 "write": true, 00:05:31.278 "unmap": true, 00:05:31.278 "write_zeroes": true, 00:05:31.278 "flush": true, 00:05:31.278 "reset": true, 00:05:31.278 "compare": false, 00:05:31.278 "compare_and_write": false, 00:05:31.278 "abort": true, 00:05:31.278 "nvme_admin": false, 00:05:31.278 "nvme_io": false 00:05:31.278 }, 00:05:31.278 "memory_domains": [ 00:05:31.278 { 00:05:31.278 "dma_device_id": "system", 00:05:31.278 "dma_device_type": 1 00:05:31.278 }, 00:05:31.278 { 00:05:31.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.278 "dma_device_type": 2 00:05:31.278 } 00:05:31.278 ], 00:05:31.278 "driver_specific": {} 00:05:31.278 }, 00:05:31.278 { 00:05:31.278 "name": "Passthru0", 00:05:31.278 "aliases": [ 00:05:31.278 "0f727e38-ae0d-5f72-83f3-abb9270dd7e1" 00:05:31.278 ], 00:05:31.278 "product_name": "passthru", 00:05:31.278 "block_size": 512, 00:05:31.278 "num_blocks": 16384, 00:05:31.278 "uuid": "0f727e38-ae0d-5f72-83f3-abb9270dd7e1", 00:05:31.278 "assigned_rate_limits": { 00:05:31.278 "rw_ios_per_sec": 0, 00:05:31.278 "rw_mbytes_per_sec": 0, 00:05:31.278 "r_mbytes_per_sec": 0, 00:05:31.278 "w_mbytes_per_sec": 0 00:05:31.278 }, 00:05:31.278 "claimed": false, 00:05:31.278 "zoned": false, 00:05:31.278 "supported_io_types": { 00:05:31.278 "read": true, 00:05:31.278 "write": true, 00:05:31.278 "unmap": true, 00:05:31.278 "write_zeroes": true, 00:05:31.278 "flush": true, 00:05:31.278 "reset": true, 00:05:31.278 "compare": false, 00:05:31.278 "compare_and_write": false, 00:05:31.278 "abort": true, 00:05:31.278 "nvme_admin": false, 00:05:31.278 "nvme_io": false 00:05:31.278 }, 00:05:31.278 "memory_domains": [ 00:05:31.279 { 00:05:31.279 "dma_device_id": "system", 00:05:31.279 "dma_device_type": 1 00:05:31.279 }, 00:05:31.279 { 00:05:31.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.279 "dma_device_type": 2 00:05:31.279 } 00:05:31.279 ], 00:05:31.279 "driver_specific": { 00:05:31.279 "passthru": { 00:05:31.279 "name": "Passthru0", 00:05:31.279 "base_bdev_name": "Malloc2" 00:05:31.279 } 00:05:31.279 } 00:05:31.279 } 00:05:31.279 ]' 00:05:31.279 18:38:43 
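# Sketch (not from the captured run): one detail visible in the dump above is that creating the
# passthru vbdev claims its base, so Malloc2 reports "claimed": true with "claim_type":
# "exclusive_write" until Passthru0 is deleted. Quick manual checks, assuming scripts/rpc.py:
#   ./scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq '.[0].claimed'
#   ./scripts/rpc.py bdev_get_bdevs -b Passthru0 | jq -r '.[0].driver_specific.passthru.base_bdev_name'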
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.536 00:05:31.536 real 0m0.224s 00:05:31.536 user 0m0.149s 00:05:31.536 sys 0m0.023s 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.536 18:38:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 ************************************ 00:05:31.536 END TEST rpc_daemon_integrity 00:05:31.536 ************************************ 00:05:31.536 18:38:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.536 18:38:43 rpc -- rpc/rpc.sh@84 -- # killprocess 3400473 00:05:31.536 18:38:43 rpc -- common/autotest_common.sh@946 -- # '[' -z 3400473 ']' 00:05:31.536 18:38:43 rpc -- common/autotest_common.sh@950 -- # kill -0 3400473 00:05:31.536 18:38:43 rpc -- common/autotest_common.sh@951 -- # uname 00:05:31.536 18:38:43 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.536 18:38:43 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400473 00:05:31.537 18:38:43 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.537 18:38:43 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.537 18:38:43 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400473' 00:05:31.537 killing process with pid 3400473 00:05:31.537 18:38:43 rpc -- common/autotest_common.sh@965 -- # kill 3400473 00:05:31.537 18:38:43 rpc -- common/autotest_common.sh@970 -- # wait 3400473 00:05:31.793 00:05:31.793 real 0m1.868s 00:05:31.793 user 0m2.371s 00:05:31.793 sys 0m0.570s 00:05:31.793 18:38:43 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.793 18:38:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.793 ************************************ 00:05:31.793 END TEST rpc 00:05:31.793 ************************************ 00:05:32.051 18:38:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:32.051 18:38:43 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.051 18:38:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.051 18:38:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.051 ************************************ 00:05:32.051 START TEST skip_rpc 00:05:32.051 ************************************ 00:05:32.051 18:38:43 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:32.051 * Looking for test storage... 00:05:32.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.051 18:38:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.051 18:38:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.051 18:38:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:32.051 18:38:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.051 18:38:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.051 18:38:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.051 ************************************ 00:05:32.051 START TEST skip_rpc 00:05:32.051 ************************************ 00:05:32.051 18:38:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:32.051 18:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3400910 00:05:32.051 18:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:32.051 18:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.051 18:38:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:32.051 [2024-07-25 18:38:43.844875] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:32.051 [2024-07-25 18:38:43.844950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400910 ] 00:05:32.051 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.051 [2024-07-25 18:38:43.904311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.309 [2024-07-25 18:38:43.994150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3400910 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3400910 ']' 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3400910 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3400910 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3400910' 00:05:37.568 killing process with pid 3400910 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3400910 00:05:37.568 18:38:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3400910 00:05:37.568 00:05:37.568 real 0m5.431s 00:05:37.568 user 0m5.119s 00:05:37.568 sys 0m0.316s 00:05:37.568 18:38:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.568 18:38:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.568 ************************************ 00:05:37.568 END TEST skip_rpc 
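# Sketch (not from the captured run): skip_rpc above starts the target with --no-rpc-server and then
# expects every RPC to fail because no /var/tmp/spdk.sock listener exists. Reproducing that outside
# the harness, with the same binary this job built:
#   ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
#   ./scripts/rpc.py spdk_get_version && echo "unexpected success" || echo "RPC refused, as expected"
#   kill %1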
00:05:37.568 ************************************ 00:05:37.568 18:38:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.568 18:38:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.568 18:38:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.568 18:38:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.568 ************************************ 00:05:37.568 START TEST skip_rpc_with_json 00:05:37.568 ************************************ 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3401596 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3401596 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3401596 ']' 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.568 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.568 [2024-07-25 18:38:49.321991] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:37.568 [2024-07-25 18:38:49.322102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401596 ] 00:05:37.568 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.568 [2024-07-25 18:38:49.386484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.827 [2024-07-25 18:38:49.482514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.084 [2024-07-25 18:38:49.744391] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.084 request: 00:05:38.084 { 00:05:38.084 "trtype": "tcp", 00:05:38.084 "method": "nvmf_get_transports", 00:05:38.084 "req_id": 1 00:05:38.084 } 00:05:38.084 Got JSON-RPC error response 00:05:38.084 response: 00:05:38.084 { 00:05:38.084 "code": -19, 00:05:38.084 "message": "No such device" 00:05:38.084 } 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.084 [2024-07-25 18:38:49.752512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.084 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:38.084 { 00:05:38.084 "subsystems": [ 00:05:38.084 { 00:05:38.084 "subsystem": "vfio_user_target", 00:05:38.084 "config": null 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "subsystem": "keyring", 00:05:38.084 "config": [] 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "subsystem": "iobuf", 00:05:38.084 "config": [ 00:05:38.084 { 00:05:38.084 "method": "iobuf_set_options", 00:05:38.084 "params": { 00:05:38.084 "small_pool_count": 8192, 00:05:38.084 "large_pool_count": 1024, 00:05:38.084 "small_bufsize": 8192, 00:05:38.084 "large_bufsize": 135168 00:05:38.084 } 00:05:38.084 } 00:05:38.084 ] 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "subsystem": "sock", 00:05:38.084 "config": [ 00:05:38.084 { 00:05:38.084 "method": "sock_set_default_impl", 00:05:38.084 "params": { 00:05:38.084 "impl_name": "posix" 00:05:38.084 } 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "method": 
"sock_impl_set_options", 00:05:38.084 "params": { 00:05:38.084 "impl_name": "ssl", 00:05:38.084 "recv_buf_size": 4096, 00:05:38.084 "send_buf_size": 4096, 00:05:38.084 "enable_recv_pipe": true, 00:05:38.084 "enable_quickack": false, 00:05:38.084 "enable_placement_id": 0, 00:05:38.084 "enable_zerocopy_send_server": true, 00:05:38.084 "enable_zerocopy_send_client": false, 00:05:38.084 "zerocopy_threshold": 0, 00:05:38.084 "tls_version": 0, 00:05:38.084 "enable_ktls": false 00:05:38.084 } 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "method": "sock_impl_set_options", 00:05:38.084 "params": { 00:05:38.084 "impl_name": "posix", 00:05:38.084 "recv_buf_size": 2097152, 00:05:38.084 "send_buf_size": 2097152, 00:05:38.084 "enable_recv_pipe": true, 00:05:38.084 "enable_quickack": false, 00:05:38.084 "enable_placement_id": 0, 00:05:38.084 "enable_zerocopy_send_server": true, 00:05:38.084 "enable_zerocopy_send_client": false, 00:05:38.084 "zerocopy_threshold": 0, 00:05:38.084 "tls_version": 0, 00:05:38.084 "enable_ktls": false 00:05:38.084 } 00:05:38.084 } 00:05:38.084 ] 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "subsystem": "vmd", 00:05:38.084 "config": [] 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "subsystem": "accel", 00:05:38.084 "config": [ 00:05:38.084 { 00:05:38.084 "method": "accel_set_options", 00:05:38.084 "params": { 00:05:38.084 "small_cache_size": 128, 00:05:38.084 "large_cache_size": 16, 00:05:38.084 "task_count": 2048, 00:05:38.084 "sequence_count": 2048, 00:05:38.084 "buf_count": 2048 00:05:38.084 } 00:05:38.084 } 00:05:38.084 ] 00:05:38.084 }, 00:05:38.084 { 00:05:38.084 "subsystem": "bdev", 00:05:38.084 "config": [ 00:05:38.084 { 00:05:38.084 "method": "bdev_set_options", 00:05:38.084 "params": { 00:05:38.084 "bdev_io_pool_size": 65535, 00:05:38.084 "bdev_io_cache_size": 256, 00:05:38.084 "bdev_auto_examine": true, 00:05:38.084 "iobuf_small_cache_size": 128, 00:05:38.084 "iobuf_large_cache_size": 16 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "bdev_raid_set_options", 00:05:38.085 "params": { 00:05:38.085 "process_window_size_kb": 1024 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "bdev_iscsi_set_options", 00:05:38.085 "params": { 00:05:38.085 "timeout_sec": 30 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "bdev_nvme_set_options", 00:05:38.085 "params": { 00:05:38.085 "action_on_timeout": "none", 00:05:38.085 "timeout_us": 0, 00:05:38.085 "timeout_admin_us": 0, 00:05:38.085 "keep_alive_timeout_ms": 10000, 00:05:38.085 "arbitration_burst": 0, 00:05:38.085 "low_priority_weight": 0, 00:05:38.085 "medium_priority_weight": 0, 00:05:38.085 "high_priority_weight": 0, 00:05:38.085 "nvme_adminq_poll_period_us": 10000, 00:05:38.085 "nvme_ioq_poll_period_us": 0, 00:05:38.085 "io_queue_requests": 0, 00:05:38.085 "delay_cmd_submit": true, 00:05:38.085 "transport_retry_count": 4, 00:05:38.085 "bdev_retry_count": 3, 00:05:38.085 "transport_ack_timeout": 0, 00:05:38.085 "ctrlr_loss_timeout_sec": 0, 00:05:38.085 "reconnect_delay_sec": 0, 00:05:38.085 "fast_io_fail_timeout_sec": 0, 00:05:38.085 "disable_auto_failback": false, 00:05:38.085 "generate_uuids": false, 00:05:38.085 "transport_tos": 0, 00:05:38.085 "nvme_error_stat": false, 00:05:38.085 "rdma_srq_size": 0, 00:05:38.085 "io_path_stat": false, 00:05:38.085 "allow_accel_sequence": false, 00:05:38.085 "rdma_max_cq_size": 0, 00:05:38.085 "rdma_cm_event_timeout_ms": 0, 00:05:38.085 "dhchap_digests": [ 00:05:38.085 "sha256", 00:05:38.085 "sha384", 00:05:38.085 "sha512" 
00:05:38.085 ], 00:05:38.085 "dhchap_dhgroups": [ 00:05:38.085 "null", 00:05:38.085 "ffdhe2048", 00:05:38.085 "ffdhe3072", 00:05:38.085 "ffdhe4096", 00:05:38.085 "ffdhe6144", 00:05:38.085 "ffdhe8192" 00:05:38.085 ] 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "bdev_nvme_set_hotplug", 00:05:38.085 "params": { 00:05:38.085 "period_us": 100000, 00:05:38.085 "enable": false 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "bdev_wait_for_examine" 00:05:38.085 } 00:05:38.085 ] 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "scsi", 00:05:38.085 "config": null 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "scheduler", 00:05:38.085 "config": [ 00:05:38.085 { 00:05:38.085 "method": "framework_set_scheduler", 00:05:38.085 "params": { 00:05:38.085 "name": "static" 00:05:38.085 } 00:05:38.085 } 00:05:38.085 ] 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "vhost_scsi", 00:05:38.085 "config": [] 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "vhost_blk", 00:05:38.085 "config": [] 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "ublk", 00:05:38.085 "config": [] 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "nbd", 00:05:38.085 "config": [] 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "nvmf", 00:05:38.085 "config": [ 00:05:38.085 { 00:05:38.085 "method": "nvmf_set_config", 00:05:38.085 "params": { 00:05:38.085 "discovery_filter": "match_any", 00:05:38.085 "admin_cmd_passthru": { 00:05:38.085 "identify_ctrlr": false 00:05:38.085 } 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "nvmf_set_max_subsystems", 00:05:38.085 "params": { 00:05:38.085 "max_subsystems": 1024 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "nvmf_set_crdt", 00:05:38.085 "params": { 00:05:38.085 "crdt1": 0, 00:05:38.085 "crdt2": 0, 00:05:38.085 "crdt3": 0 00:05:38.085 } 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "method": "nvmf_create_transport", 00:05:38.085 "params": { 00:05:38.085 "trtype": "TCP", 00:05:38.085 "max_queue_depth": 128, 00:05:38.085 "max_io_qpairs_per_ctrlr": 127, 00:05:38.085 "in_capsule_data_size": 4096, 00:05:38.085 "max_io_size": 131072, 00:05:38.085 "io_unit_size": 131072, 00:05:38.085 "max_aq_depth": 128, 00:05:38.085 "num_shared_buffers": 511, 00:05:38.085 "buf_cache_size": 4294967295, 00:05:38.085 "dif_insert_or_strip": false, 00:05:38.085 "zcopy": false, 00:05:38.085 "c2h_success": true, 00:05:38.085 "sock_priority": 0, 00:05:38.085 "abort_timeout_sec": 1, 00:05:38.085 "ack_timeout": 0, 00:05:38.085 "data_wr_pool_size": 0 00:05:38.085 } 00:05:38.085 } 00:05:38.085 ] 00:05:38.085 }, 00:05:38.085 { 00:05:38.085 "subsystem": "iscsi", 00:05:38.085 "config": [ 00:05:38.085 { 00:05:38.085 "method": "iscsi_set_options", 00:05:38.085 "params": { 00:05:38.085 "node_base": "iqn.2016-06.io.spdk", 00:05:38.085 "max_sessions": 128, 00:05:38.085 "max_connections_per_session": 2, 00:05:38.085 "max_queue_depth": 64, 00:05:38.085 "default_time2wait": 2, 00:05:38.085 "default_time2retain": 20, 00:05:38.085 "first_burst_length": 8192, 00:05:38.085 "immediate_data": true, 00:05:38.085 "allow_duplicated_isid": false, 00:05:38.085 "error_recovery_level": 0, 00:05:38.085 "nop_timeout": 60, 00:05:38.085 "nop_in_interval": 30, 00:05:38.085 "disable_chap": false, 00:05:38.085 "require_chap": false, 00:05:38.085 "mutual_chap": false, 00:05:38.085 "chap_group": 0, 00:05:38.085 "max_large_datain_per_connection": 64, 00:05:38.085 "max_r2t_per_connection": 4, 00:05:38.085 
"pdu_pool_size": 36864, 00:05:38.085 "immediate_data_pool_size": 16384, 00:05:38.085 "data_out_pool_size": 2048 00:05:38.085 } 00:05:38.085 } 00:05:38.085 ] 00:05:38.085 } 00:05:38.085 ] 00:05:38.085 } 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3401596 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3401596 ']' 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3401596 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3401596 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3401596' 00:05:38.085 killing process with pid 3401596 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3401596 00:05:38.085 18:38:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3401596 00:05:38.650 18:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3401736 00:05:38.650 18:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.650 18:38:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3401736 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3401736 ']' 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3401736 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3401736 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3401736' 00:05:43.913 killing process with pid 3401736 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3401736 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3401736 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.913 00:05:43.913 real 
0m6.502s 00:05:43.913 user 0m6.104s 00:05:43.913 sys 0m0.707s 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.913 18:38:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.913 ************************************ 00:05:43.913 END TEST skip_rpc_with_json 00:05:43.913 ************************************ 00:05:44.172 18:38:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:44.172 18:38:55 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.172 18:38:55 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.172 18:38:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.172 ************************************ 00:05:44.172 START TEST skip_rpc_with_delay 00:05:44.172 ************************************ 00:05:44.172 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.173 [2024-07-25 18:38:55.873823] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
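# Sketch (not from the captured run): the error above is exactly what skip_rpc_with_delay asserts:
# --wait-for-rpc makes the target pause until initialization is resumed over RPC, so combining it
# with --no-rpc-server is rejected at startup.
#   ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc    # exits with the error logged above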
00:05:44.173 [2024-07-25 18:38:55.873942] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.173 00:05:44.173 real 0m0.068s 00:05:44.173 user 0m0.041s 00:05:44.173 sys 0m0.027s 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.173 18:38:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:44.173 ************************************ 00:05:44.173 END TEST skip_rpc_with_delay 00:05:44.173 ************************************ 00:05:44.173 18:38:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:44.173 18:38:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:44.173 18:38:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:44.173 18:38:55 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.173 18:38:55 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.173 18:38:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.173 ************************************ 00:05:44.173 START TEST exit_on_failed_rpc_init 00:05:44.173 ************************************ 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3402450 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3402450 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3402450 ']' 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.173 18:38:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.173 [2024-07-25 18:38:55.986400] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:44.173 [2024-07-25 18:38:55.986502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402450 ] 00:05:44.173 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.173 [2024-07-25 18:38:56.043897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.432 [2024-07-25 18:38:56.132987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:44.692 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.692 [2024-07-25 18:38:56.437538] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:44.692 [2024-07-25 18:38:56.437614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402462 ] 00:05:44.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.692 [2024-07-25 18:38:56.501258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.961 [2024-07-25 18:38:56.596064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.961 [2024-07-25 18:38:56.596204] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:44.961 [2024-07-25 18:38:56.596223] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.961 [2024-07-25 18:38:56.596235] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3402450 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3402450 ']' 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3402450 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3402450 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3402450' 00:05:44.961 killing process with pid 3402450 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3402450 00:05:44.961 18:38:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3402450 00:05:45.530 00:05:45.530 real 0m1.185s 00:05:45.530 user 0m1.289s 00:05:45.530 sys 0m0.461s 00:05:45.530 18:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.530 18:38:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.530 ************************************ 00:05:45.530 END TEST exit_on_failed_rpc_init 00:05:45.530 ************************************ 00:05:45.530 18:38:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:45.530 00:05:45.530 real 0m13.431s 00:05:45.530 user 0m12.651s 00:05:45.530 sys 0m1.672s 00:05:45.530 18:38:57 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.530 18:38:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.530 ************************************ 00:05:45.530 END TEST skip_rpc 00:05:45.530 ************************************ 00:05:45.530 18:38:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.530 18:38:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.530 18:38:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.530 18:38:57 -- 
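# Sketch (not from the captured run): exit_on_failed_rpc_init works because both spdk_tgt instances
# default to the same RPC listen path, /var/tmp/spdk.sock, so the second one fails with the
# "socket path ... in use" error above and exits non-zero. To genuinely run two targets side by side,
# the second needs its own socket (the second path below is illustrative):
#   ./build/bin/spdk_tgt -m 0x1 &
#   ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &
#   ./scripts/rpc.py -s /var/tmp/spdk_second.sock spdk_get_version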
common/autotest_common.sh@10 -- # set +x 00:05:45.530 ************************************ 00:05:45.530 START TEST rpc_client 00:05:45.530 ************************************ 00:05:45.530 18:38:57 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.530 * Looking for test storage... 00:05:45.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:45.530 18:38:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:45.530 OK 00:05:45.530 18:38:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.530 00:05:45.530 real 0m0.062s 00:05:45.530 user 0m0.026s 00:05:45.530 sys 0m0.041s 00:05:45.530 18:38:57 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.530 18:38:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.530 ************************************ 00:05:45.530 END TEST rpc_client 00:05:45.530 ************************************ 00:05:45.530 18:38:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.530 18:38:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.530 18:38:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.530 18:38:57 -- common/autotest_common.sh@10 -- # set +x 00:05:45.530 ************************************ 00:05:45.530 START TEST json_config 00:05:45.530 ************************************ 00:05:45.530 18:38:57 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.530 18:38:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.530 18:38:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.530 18:38:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.530 18:38:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.530 18:38:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.530 18:38:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.530 18:38:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:45.530 18:38:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@47 -- # : 0 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.530 18:38:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:45.530 INFO: JSON configuration test init 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:45.530 18:38:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.530 18:38:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:45.530 18:38:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.530 18:38:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.530 18:38:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.530 18:38:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:45.530 18:38:57 json_config -- json_config/common.sh@10 -- # shift 00:05:45.530 18:38:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.530 18:38:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.530 18:38:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.530 18:38:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.531 18:38:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.531 18:38:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3402704 00:05:45.531 18:38:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.531 18:38:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.531 Waiting for target to run... 
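The trace above launches spdk_tgt with --wait-for-rpc and the trace that follows blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the same socket path as the trace; the retry budget here is illustrative, not the exact value used by autotest_common.sh:

    # Sketch: start the target, then poll its RPC socket until it responds.
    SPDK_BIN=./build/bin/spdk_tgt
    RPC_SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --wait-for-rpc &
    tgt_pid=$!

    for i in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app is listening on the socket
        if ./scripts/rpc.py -s "$RPC_SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; then
            echo "target is up (pid $tgt_pid)"
            break
        fi
        sleep 0.1
    done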
00:05:45.531 18:38:57 json_config -- json_config/common.sh@25 -- # waitforlisten 3402704 /var/tmp/spdk_tgt.sock 00:05:45.531 18:38:57 json_config -- common/autotest_common.sh@827 -- # '[' -z 3402704 ']' 00:05:45.531 18:38:57 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.531 18:38:57 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.531 18:38:57 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.531 18:38:57 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.531 18:38:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.790 [2024-07-25 18:38:57.407332] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:45.790 [2024-07-25 18:38:57.407460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402704 ] 00:05:45.790 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.049 [2024-07-25 18:38:57.747040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.049 [2024-07-25 18:38:57.810024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.618 18:38:58 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.618 18:38:58 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:46.618 18:38:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:46.618 00:05:46.618 18:38:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:46.618 18:38:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:46.618 18:38:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:46.618 18:38:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.618 18:38:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:46.618 18:38:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:46.618 18:38:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.618 18:38:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.618 18:38:58 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:46.618 18:38:58 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:46.618 18:38:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:49.914 18:39:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.914 18:39:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:49.914 18:39:01 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:49.914 18:39:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:49.914 18:39:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.914 18:39:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:49.914 18:39:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.914 18:39:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:49.914 18:39:01 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.914 18:39:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.173 MallocForNvmf0 00:05:50.173 18:39:02 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.173 18:39:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.431 MallocForNvmf1 00:05:50.431 18:39:02 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.431 18:39:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.689 [2024-07-25 18:39:02.496635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.689 18:39:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.689 18:39:02 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.948 18:39:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.948 18:39:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.207 18:39:02 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.207 18:39:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.467 18:39:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.467 18:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.725 [2024-07-25 18:39:03.467804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:51.725 18:39:03 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:51.725 18:39:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.725 18:39:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.725 18:39:03 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:51.725 18:39:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.725 18:39:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.725 18:39:03 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:51.725 18:39:03 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.725 18:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.983 MallocBdevForConfigChangeCheck 00:05:51.983 18:39:03 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:51.983 18:39:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.983 18:39:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.983 18:39:03 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:51.983 18:39:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.552 18:39:04 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:52.552 INFO: shutting down applications... 
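Before the shutdown that begins below, the json_config test built its NVMe-oF/TCP target entirely over rpc.py, as the preceding trace shows. A condensed sketch of that RPC sequence follows; the sizes, names and addresses mirror the arguments in the trace, and the socket path is the same /var/tmp/spdk_tgt.sock used throughout:

    # Sketch of the create_nvmf_subsystem_config sequence traced above.
    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Two malloc bdevs to serve as namespaces (8 MB / 512 B and 4 MB / 1024 B blocks,
    # matching the trace arguments).
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, one subsystem with both namespaces, listener on 127.0.0.1:4420.
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420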
00:05:52.552 18:39:04 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:52.552 18:39:04 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:52.552 18:39:04 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:52.552 18:39:04 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:53.930 Calling clear_iscsi_subsystem 00:05:53.930 Calling clear_nvmf_subsystem 00:05:53.930 Calling clear_nbd_subsystem 00:05:53.930 Calling clear_ublk_subsystem 00:05:53.930 Calling clear_vhost_blk_subsystem 00:05:53.930 Calling clear_vhost_scsi_subsystem 00:05:53.930 Calling clear_bdev_subsystem 00:05:53.930 18:39:05 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:53.930 18:39:05 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:53.930 18:39:05 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:53.930 18:39:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.930 18:39:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:53.930 18:39:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:54.500 18:39:06 json_config -- json_config/json_config.sh@345 -- # break 00:05:54.500 18:39:06 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:54.500 18:39:06 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:54.500 18:39:06 json_config -- json_config/common.sh@31 -- # local app=target 00:05:54.500 18:39:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:54.500 18:39:06 json_config -- json_config/common.sh@35 -- # [[ -n 3402704 ]] 00:05:54.500 18:39:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3402704 00:05:54.500 18:39:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:54.500 18:39:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.500 18:39:06 json_config -- json_config/common.sh@41 -- # kill -0 3402704 00:05:54.500 18:39:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:55.068 18:39:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:55.068 18:39:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.068 18:39:06 json_config -- json_config/common.sh@41 -- # kill -0 3402704 00:05:55.068 18:39:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:55.068 18:39:06 json_config -- json_config/common.sh@43 -- # break 00:05:55.068 18:39:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:55.068 18:39:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:55.068 SPDK target shutdown done 00:05:55.068 18:39:06 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:55.068 INFO: relaunching applications... 
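The shutdown just traced sends SIGINT and then polls the pid for up to 30 half-second intervals before declaring the target gone. A minimal sketch of that loop, assuming the same 30 x 0.5 s budget seen in the trace:

    # Sketch: ask the target to exit, then wait for the pid to disappear.
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$tgt_pid" 2>/dev/null; then
            echo "SPDK target shutdown done"
            break
        fi
        sleep 0.5
    done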
00:05:55.068 18:39:06 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.068 18:39:06 json_config -- json_config/common.sh@9 -- # local app=target 00:05:55.068 18:39:06 json_config -- json_config/common.sh@10 -- # shift 00:05:55.068 18:39:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.068 18:39:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.068 18:39:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.068 18:39:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.068 18:39:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.068 18:39:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3404010 00:05:55.068 18:39:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.068 18:39:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.068 Waiting for target to run... 00:05:55.068 18:39:06 json_config -- json_config/common.sh@25 -- # waitforlisten 3404010 /var/tmp/spdk_tgt.sock 00:05:55.068 18:39:06 json_config -- common/autotest_common.sh@827 -- # '[' -z 3404010 ']' 00:05:55.068 18:39:06 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.068 18:39:06 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.068 18:39:06 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.068 18:39:06 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.069 18:39:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.069 [2024-07-25 18:39:06.719306] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:55.069 [2024-07-25 18:39:06.719431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404010 ] 00:05:55.069 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.327 [2024-07-25 18:39:07.064021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.327 [2024-07-25 18:39:07.127889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.613 [2024-07-25 18:39:10.162035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.613 [2024-07-25 18:39:10.194476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:58.613 18:39:10 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.613 18:39:10 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:58.613 18:39:10 json_config -- json_config/common.sh@26 -- # echo '' 00:05:58.613 00:05:58.613 18:39:10 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:58.613 18:39:10 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:58.613 INFO: Checking if target configuration is the same... 
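The check announced above ("Checking if target configuration is the same...") is performed by json_diff.sh in the trace that follows: dump the live configuration with save_config, normalize both documents with config_filter.py -method sort, then diff them. A sketch of that comparison, with temp-file handling chosen here for illustration and config_filter.py assumed to read from stdin as the trace suggests:

    # Sketch of the comparison performed by json_diff.sh below.
    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=./test/json_config/config_filter.py
    live=$(mktemp)    # configuration as the running target reports it
    ondisk=$(mktemp)  # configuration the target was started from

    $RPC save_config | $FILTER -method sort > "$live"
    $FILTER -method sort < spdk_tgt_config.json > "$ondisk"

    if diff -u "$ondisk" "$live"; then
        echo "INFO: JSON config files are the same"
    else
        echo "INFO: configuration change detected."
    fi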
00:05:58.613 18:39:10 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.613 18:39:10 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:58.613 18:39:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.613 + '[' 2 -ne 2 ']' 00:05:58.613 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:58.613 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:58.613 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.613 +++ basename /dev/fd/62 00:05:58.613 ++ mktemp /tmp/62.XXX 00:05:58.613 + tmp_file_1=/tmp/62.8dE 00:05:58.613 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.613 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:58.613 + tmp_file_2=/tmp/spdk_tgt_config.json.0iN 00:05:58.613 + ret=0 00:05:58.613 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.873 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:58.873 + diff -u /tmp/62.8dE /tmp/spdk_tgt_config.json.0iN 00:05:58.873 + echo 'INFO: JSON config files are the same' 00:05:58.873 INFO: JSON config files are the same 00:05:58.873 + rm /tmp/62.8dE /tmp/spdk_tgt_config.json.0iN 00:05:58.873 + exit 0 00:05:58.873 18:39:10 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:58.873 18:39:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:58.873 INFO: changing configuration and checking if this can be detected... 00:05:58.873 18:39:10 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:58.873 18:39:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:59.132 18:39:10 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.132 18:39:10 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:59.132 18:39:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.132 + '[' 2 -ne 2 ']' 00:05:59.132 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:59.132 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:59.132 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.132 +++ basename /dev/fd/62 00:05:59.132 ++ mktemp /tmp/62.XXX 00:05:59.132 + tmp_file_1=/tmp/62.jVK 00:05:59.132 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.132 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:59.132 + tmp_file_2=/tmp/spdk_tgt_config.json.WB7 00:05:59.132 + ret=0 00:05:59.132 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.700 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.700 + diff -u /tmp/62.jVK /tmp/spdk_tgt_config.json.WB7 00:05:59.700 + ret=1 00:05:59.700 + echo '=== Start of file: /tmp/62.jVK ===' 00:05:59.700 + cat /tmp/62.jVK 00:05:59.700 + echo '=== End of file: /tmp/62.jVK ===' 00:05:59.700 + echo '' 00:05:59.700 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WB7 ===' 00:05:59.700 + cat /tmp/spdk_tgt_config.json.WB7 00:05:59.700 + echo '=== End of file: /tmp/spdk_tgt_config.json.WB7 ===' 00:05:59.700 + echo '' 00:05:59.700 + rm /tmp/62.jVK /tmp/spdk_tgt_config.json.WB7 00:05:59.700 + exit 1 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:59.700 INFO: configuration change detected. 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@317 -- # [[ -n 3404010 ]] 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.700 18:39:11 json_config -- json_config/json_config.sh@323 -- # killprocess 3404010 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@946 -- # '[' -z 3404010 ']' 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@950 -- # kill -0 3404010 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@951 -- # uname 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.700 18:39:11 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3404010 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3404010' 00:05:59.700 killing process with pid 3404010 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@965 -- # kill 3404010 00:05:59.700 18:39:11 json_config -- common/autotest_common.sh@970 -- # wait 3404010 00:06:01.608 18:39:13 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.608 18:39:13 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:01.608 18:39:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.608 18:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 18:39:13 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:01.608 18:39:13 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:01.608 INFO: Success 00:06:01.608 00:06:01.608 real 0m15.744s 00:06:01.608 user 0m17.657s 00:06:01.608 sys 0m1.819s 00:06:01.608 18:39:13 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.608 18:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 ************************************ 00:06:01.608 END TEST json_config 00:06:01.608 ************************************ 00:06:01.608 18:39:13 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:01.608 18:39:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.608 18:39:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.608 18:39:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 ************************************ 00:06:01.608 START TEST json_config_extra_key 00:06:01.608 ************************************ 00:06:01.608 18:39:13 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.608 18:39:13 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.608 18:39:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.608 18:39:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.608 18:39:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.608 18:39:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.608 18:39:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.608 18:39:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.608 18:39:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:01.608 18:39:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.608 18:39:13 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.608 18:39:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:01.608 INFO: launching applications... 00:06:01.608 18:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3405419 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.609 Waiting for target to run... 
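Unlike the earlier json_config run, which started the target with --wait-for-rpc and configured it live over RPC, this test boots spdk_tgt directly from a JSON file via --json, as the launch line above shows. A hedged sketch of that launch; the config shape noted in the comment is the general form emitted by save_config and is illustrative only, not the contents of the real extra_key.json:

    # Sketch: boot the target straight from a JSON configuration instead of --wait-for-rpc.
    # A config file is a JSON document of the general shape
    #   { "subsystems": [ { "subsystem": "bdev",
    #                       "config": [ { "method": "...", "params": { ... } } ] } ] }
    # (illustrative; the real extra_key.json used by this test will differ).
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json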
00:06:01.609 18:39:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3405419 /var/tmp/spdk_tgt.sock 00:06:01.609 18:39:13 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3405419 ']' 00:06:01.609 18:39:13 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.609 18:39:13 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.609 18:39:13 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.609 18:39:13 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.609 18:39:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.609 [2024-07-25 18:39:13.193016] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:01.609 [2024-07-25 18:39:13.193104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405419 ] 00:06:01.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.867 [2024-07-25 18:39:13.696604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.125 [2024-07-25 18:39:13.777108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.385 18:39:14 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.385 18:39:14 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:02.385 00:06:02.385 18:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:02.385 INFO: shutting down applications... 
00:06:02.385 18:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3405419 ]] 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3405419 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3405419 00:06:02.385 18:39:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.952 18:39:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.952 18:39:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.952 18:39:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3405419 00:06:02.952 18:39:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.952 18:39:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:02.952 18:39:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.952 18:39:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.952 SPDK target shutdown done 00:06:02.952 18:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:02.952 Success 00:06:02.952 00:06:02.952 real 0m1.581s 00:06:02.952 user 0m1.399s 00:06:02.952 sys 0m0.587s 00:06:02.952 18:39:14 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.952 18:39:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.952 ************************************ 00:06:02.952 END TEST json_config_extra_key 00:06:02.952 ************************************ 00:06:02.952 18:39:14 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.952 18:39:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.952 18:39:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.952 18:39:14 -- common/autotest_common.sh@10 -- # set +x 00:06:02.952 ************************************ 00:06:02.952 START TEST alias_rpc 00:06:02.952 ************************************ 00:06:02.952 18:39:14 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.952 * Looking for test storage... 
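The alias_rpc test that starts here, like most tests in this log, tears its target down through the killprocess helper traced near the end of the test: confirm the pid still exists, check that the process name is the expected SPDK reactor rather than something unrelated, then kill it and wait. A rough sketch of that helper, with the sudo-specific branch of the real helper simplified away:

    # Sketch of the killprocess pattern traced throughout this log.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # an SPDK app shows up as reactor_0
        if [ "$name" = "sudo" ]; then
            # the real helper handles sudo-wrapped targets specially; omitted here
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }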
00:06:02.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:02.952 18:39:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.952 18:39:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3405647 00:06:02.952 18:39:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.952 18:39:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3405647 00:06:02.952 18:39:14 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3405647 ']' 00:06:02.952 18:39:14 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.952 18:39:14 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.952 18:39:14 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.952 18:39:14 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.952 18:39:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.952 [2024-07-25 18:39:14.818727] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:02.952 [2024-07-25 18:39:14.818826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405647 ] 00:06:03.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.211 [2024-07-25 18:39:14.883139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.211 [2024-07-25 18:39:14.974977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.469 18:39:15 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.469 18:39:15 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:03.469 18:39:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:03.726 18:39:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3405647 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3405647 ']' 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3405647 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3405647 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3405647' 00:06:03.726 killing process with pid 3405647 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@965 -- # kill 3405647 00:06:03.726 18:39:15 alias_rpc -- common/autotest_common.sh@970 -- # wait 3405647 00:06:04.292 00:06:04.292 real 0m1.212s 00:06:04.292 user 0m1.293s 00:06:04.292 sys 0m0.446s 00:06:04.292 18:39:15 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.292 18:39:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.292 
************************************ 00:06:04.292 END TEST alias_rpc 00:06:04.292 ************************************ 00:06:04.292 18:39:15 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:04.292 18:39:15 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:04.292 18:39:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.292 18:39:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.292 18:39:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.292 ************************************ 00:06:04.292 START TEST spdkcli_tcp 00:06:04.292 ************************************ 00:06:04.292 18:39:15 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:04.292 * Looking for test storage... 00:06:04.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3405918 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:04.292 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3405918 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3405918 ']' 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.292 18:39:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.292 [2024-07-25 18:39:16.078176] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
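The spdkcli_tcp trace that continues below drives the same UNIX-socket RPC server over TCP: socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at that TCP address with retries and a timeout. A sketch of that bridge, mirroring the socat and rpc.py invocations that appear further down:

    # Sketch: expose the SPDK RPC UNIX socket over TCP and talk to it with rpc.py.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r retries the connection, -t sets the per-request timeout in seconds
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null || true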
00:06:04.292 [2024-07-25 18:39:16.078258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405918 ] 00:06:04.292 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.292 [2024-07-25 18:39:16.134025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.552 [2024-07-25 18:39:16.219969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.552 [2024-07-25 18:39:16.219973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.810 18:39:16 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.810 18:39:16 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:04.810 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3405922 00:06:04.810 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:04.810 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:05.070 [ 00:06:05.070 "bdev_malloc_delete", 00:06:05.070 "bdev_malloc_create", 00:06:05.070 "bdev_null_resize", 00:06:05.070 "bdev_null_delete", 00:06:05.070 "bdev_null_create", 00:06:05.070 "bdev_nvme_cuse_unregister", 00:06:05.070 "bdev_nvme_cuse_register", 00:06:05.070 "bdev_opal_new_user", 00:06:05.070 "bdev_opal_set_lock_state", 00:06:05.070 "bdev_opal_delete", 00:06:05.070 "bdev_opal_get_info", 00:06:05.070 "bdev_opal_create", 00:06:05.070 "bdev_nvme_opal_revert", 00:06:05.070 "bdev_nvme_opal_init", 00:06:05.070 "bdev_nvme_send_cmd", 00:06:05.070 "bdev_nvme_get_path_iostat", 00:06:05.070 "bdev_nvme_get_mdns_discovery_info", 00:06:05.070 "bdev_nvme_stop_mdns_discovery", 00:06:05.070 "bdev_nvme_start_mdns_discovery", 00:06:05.070 "bdev_nvme_set_multipath_policy", 00:06:05.070 "bdev_nvme_set_preferred_path", 00:06:05.070 "bdev_nvme_get_io_paths", 00:06:05.070 "bdev_nvme_remove_error_injection", 00:06:05.070 "bdev_nvme_add_error_injection", 00:06:05.070 "bdev_nvme_get_discovery_info", 00:06:05.070 "bdev_nvme_stop_discovery", 00:06:05.070 "bdev_nvme_start_discovery", 00:06:05.070 "bdev_nvme_get_controller_health_info", 00:06:05.070 "bdev_nvme_disable_controller", 00:06:05.070 "bdev_nvme_enable_controller", 00:06:05.070 "bdev_nvme_reset_controller", 00:06:05.070 "bdev_nvme_get_transport_statistics", 00:06:05.070 "bdev_nvme_apply_firmware", 00:06:05.070 "bdev_nvme_detach_controller", 00:06:05.070 "bdev_nvme_get_controllers", 00:06:05.070 "bdev_nvme_attach_controller", 00:06:05.070 "bdev_nvme_set_hotplug", 00:06:05.070 "bdev_nvme_set_options", 00:06:05.070 "bdev_passthru_delete", 00:06:05.070 "bdev_passthru_create", 00:06:05.070 "bdev_lvol_set_parent_bdev", 00:06:05.070 "bdev_lvol_set_parent", 00:06:05.070 "bdev_lvol_check_shallow_copy", 00:06:05.070 "bdev_lvol_start_shallow_copy", 00:06:05.070 "bdev_lvol_grow_lvstore", 00:06:05.070 "bdev_lvol_get_lvols", 00:06:05.070 "bdev_lvol_get_lvstores", 00:06:05.070 "bdev_lvol_delete", 00:06:05.070 "bdev_lvol_set_read_only", 00:06:05.070 "bdev_lvol_resize", 00:06:05.070 "bdev_lvol_decouple_parent", 00:06:05.070 "bdev_lvol_inflate", 00:06:05.070 "bdev_lvol_rename", 00:06:05.070 "bdev_lvol_clone_bdev", 00:06:05.070 "bdev_lvol_clone", 00:06:05.070 "bdev_lvol_snapshot", 00:06:05.070 "bdev_lvol_create", 00:06:05.070 "bdev_lvol_delete_lvstore", 00:06:05.070 "bdev_lvol_rename_lvstore", 
00:06:05.070 "bdev_lvol_create_lvstore", 00:06:05.070 "bdev_raid_set_options", 00:06:05.070 "bdev_raid_remove_base_bdev", 00:06:05.070 "bdev_raid_add_base_bdev", 00:06:05.070 "bdev_raid_delete", 00:06:05.070 "bdev_raid_create", 00:06:05.070 "bdev_raid_get_bdevs", 00:06:05.070 "bdev_error_inject_error", 00:06:05.070 "bdev_error_delete", 00:06:05.070 "bdev_error_create", 00:06:05.070 "bdev_split_delete", 00:06:05.070 "bdev_split_create", 00:06:05.070 "bdev_delay_delete", 00:06:05.070 "bdev_delay_create", 00:06:05.070 "bdev_delay_update_latency", 00:06:05.070 "bdev_zone_block_delete", 00:06:05.070 "bdev_zone_block_create", 00:06:05.070 "blobfs_create", 00:06:05.070 "blobfs_detect", 00:06:05.070 "blobfs_set_cache_size", 00:06:05.071 "bdev_aio_delete", 00:06:05.071 "bdev_aio_rescan", 00:06:05.071 "bdev_aio_create", 00:06:05.071 "bdev_ftl_set_property", 00:06:05.071 "bdev_ftl_get_properties", 00:06:05.071 "bdev_ftl_get_stats", 00:06:05.071 "bdev_ftl_unmap", 00:06:05.071 "bdev_ftl_unload", 00:06:05.071 "bdev_ftl_delete", 00:06:05.071 "bdev_ftl_load", 00:06:05.071 "bdev_ftl_create", 00:06:05.071 "bdev_virtio_attach_controller", 00:06:05.071 "bdev_virtio_scsi_get_devices", 00:06:05.071 "bdev_virtio_detach_controller", 00:06:05.071 "bdev_virtio_blk_set_hotplug", 00:06:05.071 "bdev_iscsi_delete", 00:06:05.071 "bdev_iscsi_create", 00:06:05.071 "bdev_iscsi_set_options", 00:06:05.071 "accel_error_inject_error", 00:06:05.071 "ioat_scan_accel_module", 00:06:05.071 "dsa_scan_accel_module", 00:06:05.071 "iaa_scan_accel_module", 00:06:05.071 "vfu_virtio_create_scsi_endpoint", 00:06:05.071 "vfu_virtio_scsi_remove_target", 00:06:05.071 "vfu_virtio_scsi_add_target", 00:06:05.071 "vfu_virtio_create_blk_endpoint", 00:06:05.071 "vfu_virtio_delete_endpoint", 00:06:05.071 "keyring_file_remove_key", 00:06:05.071 "keyring_file_add_key", 00:06:05.071 "keyring_linux_set_options", 00:06:05.071 "iscsi_get_histogram", 00:06:05.071 "iscsi_enable_histogram", 00:06:05.071 "iscsi_set_options", 00:06:05.071 "iscsi_get_auth_groups", 00:06:05.071 "iscsi_auth_group_remove_secret", 00:06:05.071 "iscsi_auth_group_add_secret", 00:06:05.071 "iscsi_delete_auth_group", 00:06:05.071 "iscsi_create_auth_group", 00:06:05.071 "iscsi_set_discovery_auth", 00:06:05.071 "iscsi_get_options", 00:06:05.071 "iscsi_target_node_request_logout", 00:06:05.071 "iscsi_target_node_set_redirect", 00:06:05.071 "iscsi_target_node_set_auth", 00:06:05.071 "iscsi_target_node_add_lun", 00:06:05.071 "iscsi_get_stats", 00:06:05.071 "iscsi_get_connections", 00:06:05.071 "iscsi_portal_group_set_auth", 00:06:05.071 "iscsi_start_portal_group", 00:06:05.071 "iscsi_delete_portal_group", 00:06:05.071 "iscsi_create_portal_group", 00:06:05.071 "iscsi_get_portal_groups", 00:06:05.071 "iscsi_delete_target_node", 00:06:05.071 "iscsi_target_node_remove_pg_ig_maps", 00:06:05.071 "iscsi_target_node_add_pg_ig_maps", 00:06:05.071 "iscsi_create_target_node", 00:06:05.071 "iscsi_get_target_nodes", 00:06:05.071 "iscsi_delete_initiator_group", 00:06:05.071 "iscsi_initiator_group_remove_initiators", 00:06:05.071 "iscsi_initiator_group_add_initiators", 00:06:05.071 "iscsi_create_initiator_group", 00:06:05.071 "iscsi_get_initiator_groups", 00:06:05.071 "nvmf_set_crdt", 00:06:05.071 "nvmf_set_config", 00:06:05.071 "nvmf_set_max_subsystems", 00:06:05.071 "nvmf_stop_mdns_prr", 00:06:05.071 "nvmf_publish_mdns_prr", 00:06:05.071 "nvmf_subsystem_get_listeners", 00:06:05.071 "nvmf_subsystem_get_qpairs", 00:06:05.071 "nvmf_subsystem_get_controllers", 00:06:05.071 "nvmf_get_stats", 00:06:05.071 
"nvmf_get_transports", 00:06:05.071 "nvmf_create_transport", 00:06:05.071 "nvmf_get_targets", 00:06:05.071 "nvmf_delete_target", 00:06:05.071 "nvmf_create_target", 00:06:05.071 "nvmf_subsystem_allow_any_host", 00:06:05.071 "nvmf_subsystem_remove_host", 00:06:05.071 "nvmf_subsystem_add_host", 00:06:05.071 "nvmf_ns_remove_host", 00:06:05.071 "nvmf_ns_add_host", 00:06:05.071 "nvmf_subsystem_remove_ns", 00:06:05.071 "nvmf_subsystem_add_ns", 00:06:05.071 "nvmf_subsystem_listener_set_ana_state", 00:06:05.071 "nvmf_discovery_get_referrals", 00:06:05.071 "nvmf_discovery_remove_referral", 00:06:05.071 "nvmf_discovery_add_referral", 00:06:05.071 "nvmf_subsystem_remove_listener", 00:06:05.071 "nvmf_subsystem_add_listener", 00:06:05.071 "nvmf_delete_subsystem", 00:06:05.071 "nvmf_create_subsystem", 00:06:05.071 "nvmf_get_subsystems", 00:06:05.071 "env_dpdk_get_mem_stats", 00:06:05.071 "nbd_get_disks", 00:06:05.071 "nbd_stop_disk", 00:06:05.071 "nbd_start_disk", 00:06:05.071 "ublk_recover_disk", 00:06:05.071 "ublk_get_disks", 00:06:05.071 "ublk_stop_disk", 00:06:05.071 "ublk_start_disk", 00:06:05.071 "ublk_destroy_target", 00:06:05.071 "ublk_create_target", 00:06:05.071 "virtio_blk_create_transport", 00:06:05.071 "virtio_blk_get_transports", 00:06:05.071 "vhost_controller_set_coalescing", 00:06:05.071 "vhost_get_controllers", 00:06:05.071 "vhost_delete_controller", 00:06:05.071 "vhost_create_blk_controller", 00:06:05.071 "vhost_scsi_controller_remove_target", 00:06:05.071 "vhost_scsi_controller_add_target", 00:06:05.071 "vhost_start_scsi_controller", 00:06:05.071 "vhost_create_scsi_controller", 00:06:05.071 "thread_set_cpumask", 00:06:05.071 "framework_get_scheduler", 00:06:05.071 "framework_set_scheduler", 00:06:05.071 "framework_get_reactors", 00:06:05.071 "thread_get_io_channels", 00:06:05.071 "thread_get_pollers", 00:06:05.071 "thread_get_stats", 00:06:05.071 "framework_monitor_context_switch", 00:06:05.071 "spdk_kill_instance", 00:06:05.071 "log_enable_timestamps", 00:06:05.071 "log_get_flags", 00:06:05.071 "log_clear_flag", 00:06:05.071 "log_set_flag", 00:06:05.071 "log_get_level", 00:06:05.071 "log_set_level", 00:06:05.071 "log_get_print_level", 00:06:05.071 "log_set_print_level", 00:06:05.071 "framework_enable_cpumask_locks", 00:06:05.071 "framework_disable_cpumask_locks", 00:06:05.071 "framework_wait_init", 00:06:05.071 "framework_start_init", 00:06:05.071 "scsi_get_devices", 00:06:05.071 "bdev_get_histogram", 00:06:05.071 "bdev_enable_histogram", 00:06:05.071 "bdev_set_qos_limit", 00:06:05.071 "bdev_set_qd_sampling_period", 00:06:05.071 "bdev_get_bdevs", 00:06:05.071 "bdev_reset_iostat", 00:06:05.071 "bdev_get_iostat", 00:06:05.071 "bdev_examine", 00:06:05.071 "bdev_wait_for_examine", 00:06:05.071 "bdev_set_options", 00:06:05.071 "notify_get_notifications", 00:06:05.071 "notify_get_types", 00:06:05.071 "accel_get_stats", 00:06:05.071 "accel_set_options", 00:06:05.071 "accel_set_driver", 00:06:05.071 "accel_crypto_key_destroy", 00:06:05.071 "accel_crypto_keys_get", 00:06:05.071 "accel_crypto_key_create", 00:06:05.071 "accel_assign_opc", 00:06:05.071 "accel_get_module_info", 00:06:05.071 "accel_get_opc_assignments", 00:06:05.071 "vmd_rescan", 00:06:05.071 "vmd_remove_device", 00:06:05.071 "vmd_enable", 00:06:05.071 "sock_get_default_impl", 00:06:05.071 "sock_set_default_impl", 00:06:05.071 "sock_impl_set_options", 00:06:05.071 "sock_impl_get_options", 00:06:05.071 "iobuf_get_stats", 00:06:05.071 "iobuf_set_options", 00:06:05.071 "keyring_get_keys", 00:06:05.071 "framework_get_pci_devices", 
00:06:05.071 "framework_get_config", 00:06:05.071 "framework_get_subsystems", 00:06:05.071 "vfu_tgt_set_base_path", 00:06:05.071 "trace_get_info", 00:06:05.071 "trace_get_tpoint_group_mask", 00:06:05.071 "trace_disable_tpoint_group", 00:06:05.071 "trace_enable_tpoint_group", 00:06:05.071 "trace_clear_tpoint_mask", 00:06:05.071 "trace_set_tpoint_mask", 00:06:05.071 "spdk_get_version", 00:06:05.071 "rpc_get_methods" 00:06:05.071 ] 00:06:05.071 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.071 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:05.071 18:39:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3405918 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3405918 ']' 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3405918 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3405918 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3405918' 00:06:05.071 killing process with pid 3405918 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3405918 00:06:05.071 18:39:16 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3405918 00:06:05.331 00:06:05.331 real 0m1.196s 00:06:05.331 user 0m2.135s 00:06:05.331 sys 0m0.445s 00:06:05.331 18:39:17 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.331 18:39:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.331 ************************************ 00:06:05.331 END TEST spdkcli_tcp 00:06:05.331 ************************************ 00:06:05.331 18:39:17 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.331 18:39:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.331 18:39:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.331 18:39:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.590 ************************************ 00:06:05.590 START TEST dpdk_mem_utility 00:06:05.590 ************************************ 00:06:05.590 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.590 * Looking for test storage... 
00:06:05.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:05.590 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:05.590 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3406118 00:06:05.590 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.590 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3406118 00:06:05.590 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3406118 ']' 00:06:05.590 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.590 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.590 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.590 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.590 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.590 [2024-07-25 18:39:17.322297] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:05.590 [2024-07-25 18:39:17.322399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406118 ] 00:06:05.590 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.590 [2024-07-25 18:39:17.378582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.590 [2024-07-25 18:39:17.462252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.849 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.849 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:05.849 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:05.849 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:05.849 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.849 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.849 { 00:06:05.849 "filename": "/tmp/spdk_mem_dump.txt" 00:06:05.849 } 00:06:05.849 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.849 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.115 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:06.115 1 heaps totaling size 814.000000 MiB 00:06:06.115 size: 814.000000 MiB heap id: 0 00:06:06.115 end heaps---------- 00:06:06.115 8 mempools totaling size 598.116089 MiB 00:06:06.115 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:06.115 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:06.115 size: 84.521057 MiB name: bdev_io_3406118 00:06:06.115 size: 51.011292 MiB name: evtpool_3406118 00:06:06.115 size: 50.003479 MiB name: 
msgpool_3406118 00:06:06.115 size: 21.763794 MiB name: PDU_Pool 00:06:06.115 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:06.115 size: 0.026123 MiB name: Session_Pool 00:06:06.115 end mempools------- 00:06:06.115 6 memzones totaling size 4.142822 MiB 00:06:06.115 size: 1.000366 MiB name: RG_ring_0_3406118 00:06:06.115 size: 1.000366 MiB name: RG_ring_1_3406118 00:06:06.115 size: 1.000366 MiB name: RG_ring_4_3406118 00:06:06.115 size: 1.000366 MiB name: RG_ring_5_3406118 00:06:06.115 size: 0.125366 MiB name: RG_ring_2_3406118 00:06:06.115 size: 0.015991 MiB name: RG_ring_3_3406118 00:06:06.115 end memzones------- 00:06:06.115 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:06.115 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:06.115 list of free elements. size: 12.519348 MiB 00:06:06.115 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:06.115 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:06.115 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:06.115 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:06.115 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:06.115 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:06.115 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:06.115 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:06.115 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:06.115 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:06.115 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:06.115 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:06.115 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:06.115 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:06.115 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:06.115 list of standard malloc elements. 
size: 199.218079 MiB 00:06:06.115 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:06.115 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:06.115 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:06.115 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:06.115 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:06.115 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:06.115 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:06.115 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:06.115 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:06.115 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:06.115 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:06.115 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:06.115 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:06.115 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:06.115 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:06.115 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:06.115 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:06.115 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:06.115 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:06.115 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:06.116 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:06.116 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:06.116 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:06.116 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:06.116 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:06.116 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:06.116 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:06.116 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:06.116 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:06.116 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:06.116 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:06.116 list of memzone associated elements. 
size: 602.262573 MiB 00:06:06.116 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:06.116 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:06.116 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:06.116 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:06.116 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:06.116 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3406118_0 00:06:06.116 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:06.116 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3406118_0 00:06:06.116 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:06.116 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3406118_0 00:06:06.116 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:06.116 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:06.116 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:06.116 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:06.116 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:06.116 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3406118 00:06:06.116 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:06.116 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3406118 00:06:06.116 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:06.116 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3406118 00:06:06.116 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:06.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:06.116 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:06.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:06.116 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:06.116 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:06.116 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:06.116 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:06.116 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:06.116 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3406118 00:06:06.116 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:06.116 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3406118 00:06:06.116 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:06.116 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3406118 00:06:06.116 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:06.116 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3406118 00:06:06.116 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:06.116 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3406118 00:06:06.116 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:06.116 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:06.116 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:06.116 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:06.116 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:06.116 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:06.116 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:06.116 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3406118 00:06:06.116 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:06.116 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:06.116 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:06.116 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:06.116 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:06.116 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3406118 00:06:06.116 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:06.116 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:06.116 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:06.116 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3406118 00:06:06.116 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:06.116 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3406118 00:06:06.116 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:06.116 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:06.116 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:06.116 18:39:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3406118 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3406118 ']' 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3406118 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3406118 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3406118' 00:06:06.116 killing process with pid 3406118 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3406118 00:06:06.116 18:39:17 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3406118 00:06:06.719 00:06:06.719 real 0m1.052s 00:06:06.719 user 0m1.019s 00:06:06.719 sys 0m0.414s 00:06:06.719 18:39:18 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.719 18:39:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.719 ************************************ 00:06:06.719 END TEST dpdk_mem_utility 00:06:06.719 ************************************ 00:06:06.719 18:39:18 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.719 18:39:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.719 18:39:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.719 18:39:18 -- common/autotest_common.sh@10 -- # set +x 00:06:06.719 ************************************ 00:06:06.719 START TEST event 00:06:06.719 ************************************ 00:06:06.719 18:39:18 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.719 * Looking for test storage... 
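The dpdk_mem_utility run above follows a two-step pattern: the env_dpdk_get_mem_stats RPC asks the running target to dump its DPDK memory state to a file (the log shows /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders that dump as the heap, mempool and memzone summaries seen above. Roughly:

    # ask the running spdk_tgt to write a DPDK memory dump
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py

    # print the per-element breakdown of heap id 0, as the test does with -m 0
    ./scripts/dpdk_mem_info.py -m 0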
00:06:06.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:06.719 18:39:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:06.719 18:39:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.719 18:39:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.719 18:39:18 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:06.719 18:39:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.719 18:39:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.719 ************************************ 00:06:06.719 START TEST event_perf 00:06:06.719 ************************************ 00:06:06.719 18:39:18 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.719 Running I/O for 1 seconds...[2024-07-25 18:39:18.412018] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:06.719 [2024-07-25 18:39:18.412135] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406310 ] 00:06:06.720 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.720 [2024-07-25 18:39:18.473575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.720 [2024-07-25 18:39:18.566335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.720 [2024-07-25 18:39:18.566389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.720 [2024-07-25 18:39:18.566506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.720 [2024-07-25 18:39:18.566508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.102 Running I/O for 1 seconds... 00:06:08.102 lcore 0: 234298 00:06:08.102 lcore 1: 234297 00:06:08.102 lcore 2: 234298 00:06:08.102 lcore 3: 234297 00:06:08.102 done. 00:06:08.102 00:06:08.102 real 0m1.251s 00:06:08.102 user 0m4.160s 00:06:08.102 sys 0m0.087s 00:06:08.102 18:39:19 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.102 18:39:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.102 ************************************ 00:06:08.102 END TEST event_perf 00:06:08.102 ************************************ 00:06:08.102 18:39:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.102 18:39:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:08.102 18:39:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.102 18:39:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.102 ************************************ 00:06:08.102 START TEST event_reactor 00:06:08.102 ************************************ 00:06:08.102 18:39:19 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.102 [2024-07-25 18:39:19.714301] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
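event_perf above (and the reactor and reactor_perf runs that follow) are standalone microbenchmarks from test/event: -m selects the reactor core mask, -t the run time in seconds, and each lcore reports how many events it processed. The equivalent manual invocations, matching the options logged here:

    # 4 cores, 1 second of event processing per core
    ./test/event/event_perf/event_perf -m 0xF -t 1

    # single-core reactor tick test and reactor performance test
    ./test/event/reactor/reactor -t 1
    ./test/event/reactor_perf/reactor_perf -t 1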
00:06:08.102 [2024-07-25 18:39:19.714366] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406466 ] 00:06:08.102 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.102 [2024-07-25 18:39:19.780733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.102 [2024-07-25 18:39:19.872020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.482 test_start 00:06:09.482 oneshot 00:06:09.482 tick 100 00:06:09.482 tick 100 00:06:09.482 tick 250 00:06:09.482 tick 100 00:06:09.482 tick 100 00:06:09.482 tick 100 00:06:09.482 tick 250 00:06:09.482 tick 500 00:06:09.482 tick 100 00:06:09.482 tick 100 00:06:09.482 tick 250 00:06:09.482 tick 100 00:06:09.482 tick 100 00:06:09.482 test_end 00:06:09.482 00:06:09.482 real 0m1.254s 00:06:09.482 user 0m1.158s 00:06:09.482 sys 0m0.091s 00:06:09.482 18:39:20 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.482 18:39:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.482 ************************************ 00:06:09.482 END TEST event_reactor 00:06:09.482 ************************************ 00:06:09.482 18:39:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.482 18:39:20 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:09.482 18:39:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.482 18:39:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.482 ************************************ 00:06:09.482 START TEST event_reactor_perf 00:06:09.482 ************************************ 00:06:09.482 18:39:21 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.482 [2024-07-25 18:39:21.018469] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:09.482 [2024-07-25 18:39:21.018535] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406626 ] 00:06:09.482 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.482 [2024-07-25 18:39:21.081525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.482 [2024-07-25 18:39:21.171053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.417 test_start 00:06:10.417 test_end 00:06:10.417 Performance: 349717 events per second 00:06:10.417 00:06:10.417 real 0m1.250s 00:06:10.417 user 0m1.159s 00:06:10.417 sys 0m0.085s 00:06:10.417 18:39:22 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.417 18:39:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.417 ************************************ 00:06:10.417 END TEST event_reactor_perf 00:06:10.417 ************************************ 00:06:10.417 18:39:22 event -- event/event.sh@49 -- # uname -s 00:06:10.417 18:39:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:10.417 18:39:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.417 18:39:22 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.417 18:39:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.417 18:39:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.676 ************************************ 00:06:10.676 START TEST event_scheduler 00:06:10.676 ************************************ 00:06:10.676 18:39:22 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.676 * Looking for test storage... 00:06:10.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:10.676 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:10.676 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3406808 00:06:10.676 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:10.676 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.676 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3406808 00:06:10.676 18:39:22 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3406808 ']' 00:06:10.676 18:39:22 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.676 18:39:22 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.676 18:39:22 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.676 18:39:22 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.676 18:39:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.676 [2024-07-25 18:39:22.393746] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:10.676 [2024-07-25 18:39:22.393815] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3406808 ] 00:06:10.676 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.676 [2024-07-25 18:39:22.448487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.676 [2024-07-25 18:39:22.533572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.676 [2024-07-25 18:39:22.533631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.676 [2024-07-25 18:39:22.533701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.676 [2024-07-25 18:39:22.533704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:10.935 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.935 POWER: Env isn't set yet! 00:06:10.935 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:10.935 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:10.935 POWER: Cannot get available frequencies of lcore 0 00:06:10.935 POWER: Attempting to initialise PSTAT power management... 
00:06:10.935 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:10.935 POWER: Initialized successfully for lcore 0 power management 00:06:10.935 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:10.935 POWER: Initialized successfully for lcore 1 power management 00:06:10.935 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:10.935 POWER: Initialized successfully for lcore 2 power management 00:06:10.935 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:10.935 POWER: Initialized successfully for lcore 3 power management 00:06:10.935 [2024-07-25 18:39:22.635283] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:10.935 [2024-07-25 18:39:22.635300] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:10.935 [2024-07-25 18:39:22.635310] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.935 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.935 [2024-07-25 18:39:22.733208] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.935 18:39:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.935 18:39:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.935 ************************************ 00:06:10.935 START TEST scheduler_create_thread 00:06:10.935 ************************************ 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.936 2 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.936 3 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.936 4 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.936 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.196 5 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.196 6 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.196 7 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.196 8 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.196 9 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.196 10 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.196 18:39:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.134 18:39:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.134 18:39:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:12.134 18:39:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.134 18:39:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.511 18:39:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.511 18:39:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:13.511 18:39:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:13.511 18:39:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.511 18:39:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.448 18:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.448 00:06:14.448 real 0m3.381s 00:06:14.448 user 0m0.013s 00:06:14.448 sys 0m0.002s 00:06:14.448 18:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.448 18:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.448 ************************************ 00:06:14.448 END TEST scheduler_create_thread 00:06:14.448 ************************************ 00:06:14.448 18:39:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:14.448 18:39:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3406808 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3406808 ']' 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3406808 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
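The scheduler_create_thread test above is driven entirely over RPC: the app is started with --wait-for-rpc, the dynamic scheduler is selected before framework_start_init, and a test-only RPC plugin (scheduler_plugin, shipped with the scheduler test app) creates, re-weights and deletes threads with given cpumasks and active percentages. A condensed sketch of the calls visible in the log (thread ids 11 and 12 are simply the ones this run reported, and rpc.py must be able to find the plugin module on its plugin path):

    # while the app is paused in --wait-for-rpc state
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init

    # test-plugin RPCs: pinned thread on core 0 at 100% activity,
    # then change one thread's active percentage and delete another thread
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12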
00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3406808 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3406808' 00:06:14.448 killing process with pid 3406808 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3406808 00:06:14.448 18:39:26 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3406808 00:06:14.708 [2024-07-25 18:39:26.522056] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:14.966 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:14.966 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:14.966 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:14.966 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:14.966 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:14.966 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:14.966 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:14.966 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:14.966 00:06:14.966 real 0m4.484s 00:06:14.966 user 0m8.033s 00:06:14.966 sys 0m0.302s 00:06:14.966 18:39:26 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.966 18:39:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.966 ************************************ 00:06:14.967 END TEST event_scheduler 00:06:14.967 ************************************ 00:06:14.967 18:39:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.967 18:39:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.967 18:39:26 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.967 18:39:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.967 18:39:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.967 ************************************ 00:06:14.967 START TEST app_repeat 00:06:14.967 ************************************ 00:06:14.967 18:39:26 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:14.967 18:39:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.967 18:39:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.967 18:39:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:14.967 18:39:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.967 18:39:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.967 18:39:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.967 18:39:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:15.226 18:39:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3407395 00:06:15.226 18:39:26 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.226 18:39:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.226 18:39:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3407395' 00:06:15.226 Process app_repeat pid: 3407395 00:06:15.226 18:39:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.226 18:39:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.226 spdk_app_start Round 0 00:06:15.226 18:39:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3407395 /var/tmp/spdk-nbd.sock 00:06:15.226 18:39:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3407395 ']' 00:06:15.226 18:39:26 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.226 18:39:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.226 18:39:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.226 18:39:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.226 18:39:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.226 [2024-07-25 18:39:26.863339] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:15.226 [2024-07-25 18:39:26.863403] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407395 ] 00:06:15.226 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.226 [2024-07-25 18:39:26.929595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.226 [2024-07-25 18:39:27.030087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.226 [2024-07-25 18:39:27.030093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.484 18:39:27 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.484 18:39:27 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:15.484 18:39:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.742 Malloc0 00:06:15.742 18:39:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.999 Malloc1 00:06:15.999 18:39:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.999 18:39:27 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.999 18:39:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.257 /dev/nbd0 00:06:16.257 18:39:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.257 18:39:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.257 1+0 records in 00:06:16.257 1+0 records out 00:06:16.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181296 s, 22.6 MB/s 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:16.257 18:39:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:16.257 18:39:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.257 18:39:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.257 18:39:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.515 /dev/nbd1 00:06:16.515 18:39:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.515 18:39:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:16.515 18:39:28 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.515 1+0 records in 00:06:16.515 1+0 records out 00:06:16.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196715 s, 20.8 MB/s 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:16.515 18:39:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:16.515 18:39:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.515 18:39:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.515 18:39:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.515 18:39:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.515 18:39:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.773 { 00:06:16.773 "nbd_device": "/dev/nbd0", 00:06:16.773 "bdev_name": "Malloc0" 00:06:16.773 }, 00:06:16.773 { 00:06:16.773 "nbd_device": "/dev/nbd1", 00:06:16.773 "bdev_name": "Malloc1" 00:06:16.773 } 00:06:16.773 ]' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.773 { 00:06:16.773 "nbd_device": "/dev/nbd0", 00:06:16.773 "bdev_name": "Malloc0" 00:06:16.773 }, 00:06:16.773 { 00:06:16.773 "nbd_device": "/dev/nbd1", 00:06:16.773 "bdev_name": "Malloc1" 00:06:16.773 } 00:06:16.773 ]' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.773 /dev/nbd1' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.773 /dev/nbd1' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.773 18:39:28 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.773 256+0 records in 00:06:16.773 256+0 records out 00:06:16.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505495 s, 207 MB/s 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.773 256+0 records in 00:06:16.773 256+0 records out 00:06:16.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237595 s, 44.1 MB/s 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.773 256+0 records in 00:06:16.773 256+0 records out 00:06:16.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275851 s, 38.0 MB/s 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.773 18:39:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.031 18:39:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.289 18:39:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.547 18:39:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.547 18:39:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.547 18:39:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.805 18:39:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.805 18:39:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.065 18:39:29 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:18.065 [2024-07-25 18:39:29.936760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.325 [2024-07-25 18:39:30.031681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.325 [2024-07-25 18:39:30.031685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.325 [2024-07-25 18:39:30.090972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.325 [2024-07-25 18:39:30.091043] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.863 18:39:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.863 18:39:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:20.863 spdk_app_start Round 1 00:06:20.863 18:39:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3407395 /var/tmp/spdk-nbd.sock 00:06:20.863 18:39:32 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3407395 ']' 00:06:20.863 18:39:32 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.863 18:39:32 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.863 18:39:32 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.863 18:39:32 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.863 18:39:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.121 18:39:32 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.121 18:39:32 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:21.121 18:39:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.378 Malloc0 00:06:21.378 18:39:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.635 Malloc1 00:06:21.635 18:39:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
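The restart just traced (spdk_kill_instance SIGTERM, a 3-second pause, then a fresh waitforlisten and two new bdev_malloc_create calls) is the heart of the app_repeat test. Below is a minimal sketch of that outer loop, reconstructed from the traced commands rather than copied from test/event/event.sh; the rpc.py path, app_pid, and the waitforlisten / nbd_rpc_data_verify helpers are assumed to be provided by the harness as the trace shows.

    #!/usr/bin/env bash
    # Sketch of the app_repeat outer loop (assumed layout; not the verbatim SPDK test script).
    rpc_sock=/var/tmp/spdk-nbd.sock
    rpc() { ./scripts/rpc.py -s "$rpc_sock" "$@"; }
    app_pid=$1                                   # pid of the repeat app started earlier (assumed)

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        waitforlisten "$app_pid" "$rpc_sock"     # harness helper: block until the RPC socket answers
        rpc bdev_malloc_create 64 4096           # Malloc0
        rpc bdev_malloc_create 64 4096           # Malloc1
        nbd_rpc_data_verify "$rpc_sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc spdk_kill_instance SIGTERM           # end this iteration; the app restarts for the next round
        sleep 3
    done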
00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.635 18:39:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.892 /dev/nbd0 00:06:21.892 18:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.892 18:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.892 1+0 records in 00:06:21.892 1+0 records out 00:06:21.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211197 s, 19.4 MB/s 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:21.892 18:39:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:21.892 18:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.892 18:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.892 18:39:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.150 /dev/nbd1 00:06:22.150 18:39:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.407 18:39:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
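waitfornbd, traced here for nbd0 and nbd1, amounts to: poll /proc/partitions until the node appears, then prove the device serves I/O with a single 4 KiB O_DIRECT read. A simplified reconstruction follows; the real helper lives in test/common/autotest_common.sh and also retries the read, and the short sleep between polls is an addition here, not something visible in the trace.

    waitfornbd() {
        local nbd_name=$1 tmp=${2:-/tmp/nbdtest} i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device registered with the kernel?
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # one O_DIRECT read proves the NBD connection actually answers
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]                                        # non-empty copy == device ready
    }

In the trace the 1-block copy lands in test/event/nbdtest inside the workspace; the /tmp default above is only for the sketch.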
00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.407 1+0 records in 00:06:22.407 1+0 records out 00:06:22.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178093 s, 23.0 MB/s 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:22.407 18:39:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:22.407 18:39:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.407 18:39:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.407 18:39:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.407 18:39:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.407 18:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.665 { 00:06:22.665 "nbd_device": "/dev/nbd0", 00:06:22.665 "bdev_name": "Malloc0" 00:06:22.665 }, 00:06:22.665 { 00:06:22.665 "nbd_device": "/dev/nbd1", 00:06:22.665 "bdev_name": "Malloc1" 00:06:22.665 } 00:06:22.665 ]' 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.665 { 00:06:22.665 "nbd_device": "/dev/nbd0", 00:06:22.665 "bdev_name": "Malloc0" 00:06:22.665 }, 00:06:22.665 { 00:06:22.665 "nbd_device": "/dev/nbd1", 00:06:22.665 "bdev_name": "Malloc1" 00:06:22.665 } 00:06:22.665 ]' 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.665 /dev/nbd1' 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.665 /dev/nbd1' 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.665 18:39:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.666 256+0 records in 00:06:22.666 256+0 records out 00:06:22.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524025 s, 200 MB/s 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.666 256+0 records in 00:06:22.666 256+0 records out 00:06:22.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271472 s, 38.6 MB/s 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.666 256+0 records in 00:06:22.666 256+0 records out 00:06:22.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261329 s, 40.1 MB/s 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.666 18:39:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.923 
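The data path exercised just above is plain dd and cmp: stage 1 MiB of random bytes, write them through each NBD device with O_DIRECT, then compare every byte back. A condensed sketch using the same 4 KiB block size and 1 MiB payload as the trace:

    nbd_dd_data_verify_sketch() {
        local tmp=$1; shift                                    # temp file followed by the nbd devices
        dd if=/dev/urandom of="$tmp" bs=4096 count=256         # 1 MiB random payload
        for dev in "$@"; do
            dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
        done
        for dev in "$@"; do
            cmp -b -n 1M "$tmp" "$dev" || return 1             # verify phase: byte-for-byte readback
        done
        rm "$tmp"
    }
    # e.g. nbd_dd_data_verify_sketch /tmp/nbdrandtest /dev/nbd0 /dev/nbd1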
18:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.923 18:39:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.182 18:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.440 18:39:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.440 18:39:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.746 18:39:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.009 [2024-07-25 18:39:35.764331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.009 [2024-07-25 18:39:35.853902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.009 [2024-07-25 18:39:35.853906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.267 [2024-07-25 18:39:35.916262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:24.267 [2024-07-25 18:39:35.916328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.803 18:39:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.803 18:39:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:26.803 spdk_app_start Round 2 00:06:26.803 18:39:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3407395 /var/tmp/spdk-nbd.sock 00:06:26.803 18:39:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3407395 ']' 00:06:26.803 18:39:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.803 18:39:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.803 18:39:38 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.803 18:39:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.803 18:39:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.060 18:39:38 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.060 18:39:38 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:27.060 18:39:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.317 Malloc0 00:06:27.317 18:39:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.574 Malloc1 00:06:27.574 18:39:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.574 18:39:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.575 18:39:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.575 18:39:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.575 18:39:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.575 18:39:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.575 18:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.575 18:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.575 18:39:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.833 /dev/nbd0 00:06:27.833 
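Attaching the freshly created Malloc bdevs to kernel NBD nodes, as just traced for Malloc0, is one nbd_start_disk RPC per device followed by the readiness check. A sketch of that attach loop, with the rpc.py path assumed relative to the SPDK checkout and waitfornbd as sketched earlier:

    rpc_sock=/var/tmp/spdk-nbd.sock
    bdevs=(Malloc0 Malloc1)
    nbds=(/dev/nbd0 /dev/nbd1)
    for i in "${!bdevs[@]}"; do
        ./scripts/rpc.py -s "$rpc_sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
        waitfornbd "$(basename "${nbds[$i]}")"             # block until /dev/nbdX answers reads
    done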
18:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.833 18:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.833 1+0 records in 00:06:27.833 1+0 records out 00:06:27.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174807 s, 23.4 MB/s 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:27.833 18:39:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:27.834 18:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.834 18:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.834 18:39:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.092 /dev/nbd1 00:06:28.092 18:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.092 18:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.092 1+0 records in 00:06:28.092 1+0 records out 00:06:28.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217585 s, 18.8 MB/s 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:28.092 18:39:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:28.092 18:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.092 18:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.092 18:39:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.092 18:39:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.092 18:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.350 18:39:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.350 { 00:06:28.350 "nbd_device": "/dev/nbd0", 00:06:28.350 "bdev_name": "Malloc0" 00:06:28.350 }, 00:06:28.350 { 00:06:28.350 "nbd_device": "/dev/nbd1", 00:06:28.350 "bdev_name": "Malloc1" 00:06:28.350 } 00:06:28.350 ]' 00:06:28.350 18:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.350 { 00:06:28.350 "nbd_device": "/dev/nbd0", 00:06:28.350 "bdev_name": "Malloc0" 00:06:28.350 }, 00:06:28.350 { 00:06:28.350 "nbd_device": "/dev/nbd1", 00:06:28.350 "bdev_name": "Malloc1" 00:06:28.350 } 00:06:28.350 ]' 00:06:28.350 18:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.350 18:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.350 /dev/nbd1' 00:06:28.350 18:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.350 /dev/nbd1' 00:06:28.350 18:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.350 18:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.351 256+0 records in 00:06:28.351 256+0 records out 00:06:28.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499147 s, 210 MB/s 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.351 256+0 records in 00:06:28.351 256+0 records out 00:06:28.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221578 s, 47.3 MB/s 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.351 18:39:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.611 256+0 records in 00:06:28.611 256+0 records out 00:06:28.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027462 s, 38.2 MB/s 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.611 18:39:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
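Teardown mirrors the attach path: an nbd_stop_disk RPC per device, then poll /proc/partitions until the node is gone, which is what the waitfornbd_exit trace above shows. A simplified reconstruction (the polling sleep is added here; the traced helper only loops on the grep):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the partition table == detached
            sleep 0.1
        done
        ! grep -q -w "$nbd_name" /proc/partitions
    }

    nbd_stop_disks_sketch() {
        local rpc_sock=$1; shift
        for dev in "$@"; do
            ./scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
            waitfornbd_exit "$(basename "$dev")"
        done
    }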
00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.870 18:39:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.128 18:39:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.386 18:39:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.386 18:39:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.645 18:39:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.904 [2024-07-25 18:39:41.584666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.904 [2024-07-25 18:39:41.674085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.904 [2024-07-25 18:39:41.674085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.904 [2024-07-25 18:39:41.736289] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.904 [2024-07-25 18:39:41.736379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
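The device counts checked throughout the trace (2 while the devices are attached, 0 after teardown) come from a single nbd_get_disks RPC filtered with jq and grep. A sketch of that query:

    nbd_get_count_sketch() {
        local rpc_sock=$1
        ./scripts/rpc.py -s "$rpc_sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true       # grep -c exits 1 on zero matches, so keep going and print 0
    }
    # count=$(nbd_get_count_sketch /var/tmp/spdk-nbd.sock); [ "$count" -eq 2 ]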
00:06:33.193 18:39:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3407395 /var/tmp/spdk-nbd.sock 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3407395 ']' 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:33.193 18:39:44 event.app_repeat -- event/event.sh@39 -- # killprocess 3407395 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3407395 ']' 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3407395 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3407395 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3407395' 00:06:33.193 killing process with pid 3407395 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3407395 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3407395 00:06:33.193 spdk_app_start is called in Round 0. 00:06:33.193 Shutdown signal received, stop current app iteration 00:06:33.193 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:33.193 spdk_app_start is called in Round 1. 00:06:33.193 Shutdown signal received, stop current app iteration 00:06:33.193 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:33.193 spdk_app_start is called in Round 2. 00:06:33.193 Shutdown signal received, stop current app iteration 00:06:33.193 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:33.193 spdk_app_start is called in Round 3. 
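killprocess, traced above for the repeat app, looks up the command name first so it never signals the sudo wrapper by mistake, then kills and reaps the pid. A trimmed-down sketch; the real helper in test/common/autotest_common.sh also covers non-Linux hosts and escalation through sudo:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        process_name=$(ps --no-headers -o comm= "$pid") || return 1   # e.g. reactor_0 for an SPDK target
        if [ "$process_name" = sudo ]; then
            return 1                           # refuse: killing sudo would orphan the real target
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true        # reap it when it is our own child
    }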
00:06:33.193 Shutdown signal received, stop current app iteration 00:06:33.193 18:39:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.193 18:39:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:33.193 00:06:33.193 real 0m17.988s 00:06:33.193 user 0m39.252s 00:06:33.193 sys 0m3.222s 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.193 18:39:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.193 ************************************ 00:06:33.193 END TEST app_repeat 00:06:33.193 ************************************ 00:06:33.193 18:39:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.193 18:39:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.193 18:39:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.193 18:39:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.193 18:39:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.193 ************************************ 00:06:33.193 START TEST cpu_locks 00:06:33.193 ************************************ 00:06:33.193 18:39:44 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.193 * Looking for test storage... 00:06:33.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:33.193 18:39:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.193 18:39:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.193 18:39:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.193 18:39:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.193 18:39:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.193 18:39:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.193 18:39:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.193 ************************************ 00:06:33.193 START TEST default_locks 00:06:33.193 ************************************ 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3409742 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3409742 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3409742 ']' 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
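The default_locks test starting here asserts that a target launched with -m 0x1 really holds its per-core lock file: lslocks on the target pid must list spdk_cpu_lock while it runs, and the pid must be unreachable once killprocess has reaped it. A sketch of that assertion, assuming spdk_tgt and the harness helpers are available as in the trace:

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    waitforlisten "$tgt_pid"                              # harness helper: wait for /var/tmp/spdk.sock
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock         # the core-0 lock must be held while it runs
    killprocess "$tgt_pid"
    ! kill -0 "$tgt_pid" 2>/dev/null                      # the locked pid is gone afterwards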
00:06:33.193 18:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.193 18:39:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.193 [2024-07-25 18:39:45.010468] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:33.193 [2024-07-25 18:39:45.010545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409742 ] 00:06:33.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.193 [2024-07-25 18:39:45.068680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.451 [2024-07-25 18:39:45.153923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.711 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.711 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:33.711 18:39:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3409742 00:06:33.711 18:39:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3409742 00:06:33.711 18:39:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.967 lslocks: write error 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3409742 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3409742 ']' 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3409742 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409742 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409742' 00:06:33.967 killing process with pid 3409742 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3409742 00:06:33.967 18:39:45 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3409742 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3409742 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3409742 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3409742 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3409742 ']' 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3409742) - No such process 00:06:34.534 ERROR: process (pid: 3409742) is no longer running 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.534 00:06:34.534 real 0m1.229s 00:06:34.534 user 0m1.168s 00:06:34.534 sys 0m0.528s 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.534 18:39:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.534 ************************************ 00:06:34.534 END TEST default_locks 00:06:34.534 ************************************ 00:06:34.534 18:39:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:34.534 18:39:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.534 18:39:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.534 18:39:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.534 ************************************ 00:06:34.534 START TEST default_locks_via_rpc 00:06:34.534 ************************************ 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3409910 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3409910 00:06:34.534 18:39:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3409910 ']' 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.534 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.534 [2024-07-25 18:39:46.283258] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:34.534 [2024-07-25 18:39:46.283356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409910 ] 00:06:34.534 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.534 [2024-07-25 18:39:46.340076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.794 [2024-07-25 18:39:46.429476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3409910 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3409910 00:06:35.051 18:39:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3409910 00:06:35.310 18:39:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3409910 ']' 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3409910 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3409910 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3409910' 00:06:35.310 killing process with pid 3409910 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3409910 00:06:35.310 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3409910 00:06:35.878 00:06:35.878 real 0m1.264s 00:06:35.878 user 0m1.212s 00:06:35.878 sys 0m0.534s 00:06:35.879 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.879 18:39:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.879 ************************************ 00:06:35.879 END TEST default_locks_via_rpc 00:06:35.879 ************************************ 00:06:35.879 18:39:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:35.879 18:39:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.879 18:39:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.879 18:39:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.879 ************************************ 00:06:35.879 START TEST non_locking_app_on_locked_coremask 00:06:35.879 ************************************ 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3410088 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3410088 /var/tmp/spdk.sock 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3410088 ']' 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
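The default_locks_via_rpc pass above exercises the same core lock through the RPC socket instead of the command line. A minimal sketch of that flow, pieced together from the rpc_cmd, waitforlisten and lslocks steps visible in the trace (error handling and retries omitted):

  # target started normally, so it claims its core 0 lock file at startup
  build/bin/spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid

  # the lock can be dropped and re-taken at runtime over /var/tmp/spdk.sock
  rpc_cmd framework_disable_cpumask_locks
  rpc_cmd framework_enable_cpumask_locks

  # locks_exist: the re-claimed lock shows up in lslocks for the target pid
  lslocks -p $spdk_tgt_pid | grep -q spdk_cpu_lock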
00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.879 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.879 [2024-07-25 18:39:47.594383] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:35.879 [2024-07-25 18:39:47.594459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410088 ] 00:06:35.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.879 [2024-07-25 18:39:47.650737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.879 [2024-07-25 18:39:47.740088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3410190 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3410190 /var/tmp/spdk2.sock 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3410190 ']' 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.137 18:39:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.396 [2024-07-25 18:39:48.039286] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:36.396 [2024-07-25 18:39:48.039378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410190 ] 00:06:36.396 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.396 [2024-07-25 18:39:48.135068] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
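The non_locking_app_on_locked_coremask case starting here runs two targets on the same core, with only the second opting out of lock files. Roughly, as a sketch assembled from the commands in the trace rather than the full helper:

  # first instance claims the core 0 lock at startup
  build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  waitforlisten $pid1

  # second instance shares core 0 but skips the lock, on its own RPC socket
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  waitforlisten $pid2 /var/tmp/spdk2.sock

  # only the first pid is expected to hold spdk_cpu_lock
  lslocks -p $pid1 | grep -q spdk_cpu_lock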
00:06:36.396 [2024-07-25 18:39:48.135105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.656 [2024-07-25 18:39:48.318870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.221 18:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.221 18:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:37.221 18:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3410088 00:06:37.221 18:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3410088 00:06:37.221 18:39:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.788 lslocks: write error 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3410088 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3410088 ']' 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3410088 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410088 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410088' 00:06:37.788 killing process with pid 3410088 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3410088 00:06:37.788 18:39:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3410088 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3410190 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3410190 ']' 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3410190 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410190 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410190' 00:06:38.725 
killing process with pid 3410190 00:06:38.725 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3410190 00:06:38.726 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3410190 00:06:38.983 00:06:38.983 real 0m3.145s 00:06:38.983 user 0m3.271s 00:06:38.983 sys 0m1.050s 00:06:38.983 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.983 18:39:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 ************************************ 00:06:38.983 END TEST non_locking_app_on_locked_coremask 00:06:38.983 ************************************ 00:06:38.983 18:39:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:38.983 18:39:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.983 18:39:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.984 18:39:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.984 ************************************ 00:06:38.984 START TEST locking_app_on_unlocked_coremask 00:06:38.984 ************************************ 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3410506 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3410506 /var/tmp/spdk.sock 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3410506 ']' 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.984 18:39:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.984 [2024-07-25 18:39:50.791082] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:38.984 [2024-07-25 18:39:50.791187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410506 ] 00:06:38.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.984 [2024-07-25 18:39:50.852545] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
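Each case tears its targets down through the killprocess sequence traced above. Reduced to its essentials (the real helper in autotest_common.sh carries a sudo branch and extra checks; this is only a sketch):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                       # must still be running
      ps --no-headers -o comm= "$pid"      # reports reactor_0 for an spdk_tgt
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                          # returns once the reactor exits
  }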
00:06:38.984 [2024-07-25 18:39:50.852582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.242 [2024-07-25 18:39:50.941116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.500 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.500 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3410630 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3410630 /var/tmp/spdk2.sock 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3410630 ']' 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.501 18:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.501 [2024-07-25 18:39:51.251274] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:39.501 [2024-07-25 18:39:51.251372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410630 ] 00:06:39.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.501 [2024-07-25 18:39:51.346372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.760 [2024-07-25 18:39:51.529177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.328 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.328 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:40.328 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3410630 00:06:40.328 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3410630 00:06:40.328 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.895 lslocks: write error 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3410506 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3410506 ']' 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3410506 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410506 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410506' 00:06:40.895 killing process with pid 3410506 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3410506 00:06:40.895 18:39:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3410506 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3410630 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3410630 ']' 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3410630 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410630 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410630' 00:06:41.852 killing process with pid 3410630 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3410630 00:06:41.852 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3410630 00:06:42.425 00:06:42.425 real 0m3.255s 00:06:42.425 user 0m3.422s 00:06:42.425 sys 0m1.050s 00:06:42.425 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.425 18:39:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.425 ************************************ 00:06:42.425 END TEST locking_app_on_unlocked_coremask 00:06:42.425 ************************************ 00:06:42.425 18:39:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:42.425 18:39:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.425 18:39:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.425 18:39:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.425 ************************************ 00:06:42.425 START TEST locking_app_on_locked_coremask 00:06:42.425 ************************************ 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3410942 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3410942 /var/tmp/spdk.sock 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3410942 ']' 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.425 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.425 [2024-07-25 18:39:54.098323] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:42.425 [2024-07-25 18:39:54.098407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410942 ] 00:06:42.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.425 [2024-07-25 18:39:54.160333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.425 [2024-07-25 18:39:54.248805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3411066 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3411066 /var/tmp/spdk2.sock 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3411066 /var/tmp/spdk2.sock 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3411066 /var/tmp/spdk2.sock 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3411066 ']' 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.684 18:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.684 [2024-07-25 18:39:54.551002] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
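locking_app_on_locked_coremask expects the second instance to fail: without --disable-cpumask-locks it cannot take core 0 while pid 3410942 holds it. The assertion pattern from the trace, roughly:

  # second target, same core, default locking behaviour
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!

  # NOT inverts the exit status: the step passes only because waitforlisten
  # fails after the target aborts with "Cannot create lock on core 0 ..."
  NOT waitforlisten $pid2 /var/tmp/spdk2.sock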
00:06:42.684 [2024-07-25 18:39:54.551103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411066 ] 00:06:42.941 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.941 [2024-07-25 18:39:54.647514] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3410942 has claimed it. 00:06:42.941 [2024-07-25 18:39:54.647591] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3411066) - No such process 00:06:43.508 ERROR: process (pid: 3411066) is no longer running 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3410942 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3410942 00:06:43.508 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.768 lslocks: write error 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3410942 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3410942 ']' 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3410942 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3410942 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3410942' 00:06:43.768 killing process with pid 3410942 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3410942 00:06:43.768 18:39:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3410942 00:06:44.335 00:06:44.335 real 0m1.967s 00:06:44.335 user 0m2.105s 00:06:44.335 sys 0m0.642s 00:06:44.335 18:39:56 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.335 18:39:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.335 ************************************ 00:06:44.335 END TEST locking_app_on_locked_coremask 00:06:44.335 ************************************ 00:06:44.335 18:39:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.335 18:39:56 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.335 18:39:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.335 18:39:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.335 ************************************ 00:06:44.336 START TEST locking_overlapped_coremask 00:06:44.336 ************************************ 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3411233 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3411233 /var/tmp/spdk.sock 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3411233 ']' 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.336 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.336 [2024-07-25 18:39:56.111816] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:44.336 [2024-07-25 18:39:56.111892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411233 ] 00:06:44.336 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.336 [2024-07-25 18:39:56.169517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.596 [2024-07-25 18:39:56.259251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.596 [2024-07-25 18:39:56.259311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.596 [2024-07-25 18:39:56.259313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3411251 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3411251 /var/tmp/spdk2.sock 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3411251 /var/tmp/spdk2.sock 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3411251 /var/tmp/spdk2.sock 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3411251 ']' 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.854 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.855 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.855 18:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.855 [2024-07-25 18:39:56.560391] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
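The conflict driving this run is a single shared core: mask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so only core 2 is contested. A quick illustration (not part of the test script itself):

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2
  # after the second launch fails, only the first target's locks remain:
  ls /var/tmp/spdk_cpu_lock_*                  # _000 _001 _002, as check_remaining_locks expects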
00:06:44.855 [2024-07-25 18:39:56.560487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411251 ] 00:06:44.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.855 [2024-07-25 18:39:56.651658] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3411233 has claimed it. 00:06:44.855 [2024-07-25 18:39:56.651723] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:45.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3411251) - No such process 00:06:45.424 ERROR: process (pid: 3411251) is no longer running 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3411233 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3411233 ']' 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3411233 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3411233 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3411233' 00:06:45.424 killing process with pid 3411233 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3411233 00:06:45.424 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3411233 00:06:45.994 00:06:45.994 real 0m1.634s 00:06:45.994 user 0m4.430s 00:06:45.994 sys 0m0.457s 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.994 ************************************ 00:06:45.994 END TEST locking_overlapped_coremask 00:06:45.994 ************************************ 00:06:45.994 18:39:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:45.994 18:39:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.994 18:39:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.994 18:39:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.994 ************************************ 00:06:45.994 START TEST locking_overlapped_coremask_via_rpc 00:06:45.994 ************************************ 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3411458 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3411458 /var/tmp/spdk.sock 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3411458 ']' 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.994 18:39:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.994 [2024-07-25 18:39:57.795530] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:45.994 [2024-07-25 18:39:57.795615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411458 ] 00:06:45.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.994 [2024-07-25 18:39:57.853268] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.994 [2024-07-25 18:39:57.853305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.254 [2024-07-25 18:39:57.944292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.254 [2024-07-25 18:39:57.944348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.254 [2024-07-25 18:39:57.944351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3411543 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3411543 /var/tmp/spdk2.sock 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3411543 ']' 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.513 18:39:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.513 [2024-07-25 18:39:58.237633] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:46.513 [2024-07-25 18:39:58.237727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411543 ] 00:06:46.513 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.513 [2024-07-25 18:39:58.326428] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.513 [2024-07-25 18:39:58.326463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.773 [2024-07-25 18:39:58.498066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.773 [2024-07-25 18:39:58.501159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.773 [2024-07-25 18:39:58.501162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.340 [2024-07-25 18:39:59.189167] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3411458 has claimed it. 
00:06:47.340 request: 00:06:47.340 { 00:06:47.340 "method": "framework_enable_cpumask_locks", 00:06:47.340 "req_id": 1 00:06:47.340 } 00:06:47.340 Got JSON-RPC error response 00:06:47.340 response: 00:06:47.340 { 00:06:47.340 "code": -32603, 00:06:47.340 "message": "Failed to claim CPU core: 2" 00:06:47.340 } 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3411458 /var/tmp/spdk.sock 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3411458 ']' 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.340 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3411543 /var/tmp/spdk2.sock 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3411543 ']' 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
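The request/response pair above is the JSON-RPC face of the same conflict: the second target (mask 0x1c, started with --disable-cpumask-locks) asks to claim its locks while pid 3411458 already holds core 2. With the rpc_cmd helper used in the trace (which wraps the SPDK RPC client), the two calls look like:

  rpc_cmd framework_enable_cpumask_locks                              # first target: claims cores 0-2
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: -32603, core 2 already taken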
00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.598 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.857 00:06:47.857 real 0m1.967s 00:06:47.857 user 0m1.022s 00:06:47.857 sys 0m0.190s 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.857 18:39:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.857 ************************************ 00:06:47.857 END TEST locking_overlapped_coremask_via_rpc 00:06:47.857 ************************************ 00:06:47.857 18:39:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.857 18:39:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3411458 ]] 00:06:47.857 18:39:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3411458 00:06:47.857 18:39:59 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3411458 ']' 00:06:47.857 18:39:59 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3411458 00:06:47.857 18:39:59 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:48.115 18:39:59 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.115 18:39:59 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3411458 00:06:48.115 18:39:59 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.115 18:39:59 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.115 18:39:59 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3411458' 00:06:48.115 killing process with pid 3411458 00:06:48.115 18:39:59 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3411458 00:06:48.115 18:39:59 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3411458 00:06:48.374 18:40:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3411543 ]] 00:06:48.374 18:40:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3411543 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3411543 ']' 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3411543 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3411543 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3411543' 00:06:48.374 killing process with pid 3411543 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3411543 00:06:48.374 18:40:00 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3411543 00:06:48.941 18:40:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.941 18:40:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:48.941 18:40:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3411458 ]] 00:06:48.941 18:40:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3411458 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3411458 ']' 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3411458 00:06:48.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3411458) - No such process 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3411458 is not found' 00:06:48.941 Process with pid 3411458 is not found 00:06:48.941 18:40:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3411543 ]] 00:06:48.941 18:40:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3411543 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3411543 ']' 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3411543 00:06:48.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3411543) - No such process 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3411543 is not found' 00:06:48.941 Process with pid 3411543 is not found 00:06:48.941 18:40:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.941 00:06:48.941 real 0m15.717s 00:06:48.941 user 0m27.441s 00:06:48.941 sys 0m5.342s 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.941 18:40:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.941 ************************************ 00:06:48.941 END TEST cpu_locks 00:06:48.941 ************************************ 00:06:48.941 00:06:48.941 real 0m42.300s 00:06:48.941 user 1m21.344s 00:06:48.941 sys 0m9.367s 00:06:48.941 18:40:00 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.941 18:40:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.941 ************************************ 00:06:48.941 END TEST event 00:06:48.941 ************************************ 00:06:48.941 18:40:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:48.941 18:40:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.941 18:40:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.941 18:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:48.941 ************************************ 00:06:48.941 START TEST thread 00:06:48.941 ************************************ 00:06:48.941 18:40:00 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:48.941 * Looking for test storage... 00:06:48.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:48.941 18:40:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.941 18:40:00 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:48.941 18:40:00 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.941 18:40:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.941 ************************************ 00:06:48.941 START TEST thread_poller_perf 00:06:48.941 ************************************ 00:06:48.941 18:40:00 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.941 [2024-07-25 18:40:00.740921] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:48.941 [2024-07-25 18:40:00.740984] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411909 ] 00:06:48.941 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.941 [2024-07-25 18:40:00.803108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.200 [2024-07-25 18:40:00.893654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.201 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:50.137 ====================================== 00:06:50.137 busy:2710766673 (cyc) 00:06:50.137 total_run_count: 291000 00:06:50.137 tsc_hz: 2700000000 (cyc) 00:06:50.137 ====================================== 00:06:50.137 poller_cost: 9315 (cyc), 3450 (nsec) 00:06:50.137 00:06:50.137 real 0m1.254s 00:06:50.137 user 0m1.171s 00:06:50.137 sys 0m0.078s 00:06:50.137 18:40:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.137 18:40:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.137 ************************************ 00:06:50.137 END TEST thread_poller_perf 00:06:50.137 ************************************ 00:06:50.137 18:40:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:50.137 18:40:02 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:50.137 18:40:02 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.137 18:40:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.395 ************************************ 00:06:50.395 START TEST thread_poller_perf 00:06:50.395 ************************************ 00:06:50.395 18:40:02 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:50.395 [2024-07-25 18:40:02.042452] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
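The poller_cost line in the banner above is consistent with dividing the reported busy cycles by total_run_count and converting cycles to nanoseconds via tsc_hz. A minimal sketch of that arithmetic for the first run, using shell names (busy, runs, tsc_hz) that are only shorthand for the banner fields, not variables from poller_perf itself:

busy=2710766673        # busy: cycles spent polling during the 1-second run
runs=291000            # total_run_count
tsc_hz=2700000000      # TSC frequency (2.7 GHz)
cyc_per_poll=$((busy / runs))                        # 9315 cyc, matching poller_cost
ns_per_poll=$((cyc_per_poll * 1000000000 / tsc_hz))  # 3450 nsec
echo "poller_cost: ${cyc_per_poll} (cyc), ${ns_per_poll} (nsec)"

The second run below, launched with -l 0 so the pollers get no idle period, works out the same way: 2702969276 cycles over 3851000 iterations is roughly 701 cyc, or about 259 ns at 2.7 GHz.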
00:06:50.395 [2024-07-25 18:40:02.042517] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412064 ] 00:06:50.395 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.395 [2024-07-25 18:40:02.103639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.395 [2024-07-25 18:40:02.196382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.395 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:51.775 ====================================== 00:06:51.775 busy:2702969276 (cyc) 00:06:51.775 total_run_count: 3851000 00:06:51.775 tsc_hz: 2700000000 (cyc) 00:06:51.775 ====================================== 00:06:51.775 poller_cost: 701 (cyc), 259 (nsec) 00:06:51.775 00:06:51.775 real 0m1.249s 00:06:51.775 user 0m1.159s 00:06:51.775 sys 0m0.084s 00:06:51.775 18:40:03 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.775 18:40:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.775 ************************************ 00:06:51.775 END TEST thread_poller_perf 00:06:51.775 ************************************ 00:06:51.775 18:40:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:51.775 00:06:51.775 real 0m2.638s 00:06:51.775 user 0m2.385s 00:06:51.775 sys 0m0.250s 00:06:51.775 18:40:03 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.775 18:40:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.775 ************************************ 00:06:51.775 END TEST thread 00:06:51.775 ************************************ 00:06:51.775 18:40:03 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:51.775 18:40:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.775 18:40:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.775 18:40:03 -- common/autotest_common.sh@10 -- # set +x 00:06:51.775 ************************************ 00:06:51.775 START TEST accel 00:06:51.775 ************************************ 00:06:51.775 18:40:03 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:51.775 * Looking for test storage... 
00:06:51.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:51.775 18:40:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:51.775 18:40:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:51.775 18:40:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.775 18:40:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3412272 00:06:51.775 18:40:03 accel -- accel/accel.sh@63 -- # waitforlisten 3412272 00:06:51.775 18:40:03 accel -- common/autotest_common.sh@827 -- # '[' -z 3412272 ']' 00:06:51.775 18:40:03 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:51.775 18:40:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:51.775 18:40:03 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.775 18:40:03 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.775 18:40:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.775 18:40:03 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.775 18:40:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.775 18:40:03 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.775 18:40:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.775 18:40:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.775 18:40:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.775 18:40:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.775 18:40:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:51.775 18:40:03 accel -- accel/accel.sh@41 -- # jq -r . 00:06:51.775 [2024-07-25 18:40:03.448542] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:51.775 [2024-07-25 18:40:03.448640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412272 ] 00:06:51.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.775 [2024-07-25 18:40:03.512674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.775 [2024-07-25 18:40:03.596702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@860 -- # return 0 00:06:52.033 18:40:03 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:52.033 18:40:03 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:52.033 18:40:03 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:52.033 18:40:03 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:52.033 18:40:03 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:52.033 18:40:03 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.033 18:40:03 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 
18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # IFS== 00:06:52.033 18:40:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:52.033 18:40:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:52.033 18:40:03 accel -- accel/accel.sh@75 -- # killprocess 3412272 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@946 -- # '[' -z 3412272 ']' 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@950 -- # kill -0 3412272 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@951 -- # uname 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.033 18:40:03 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3412272 00:06:52.291 18:40:03 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.291 18:40:03 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.291 18:40:03 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3412272' 00:06:52.291 killing process with pid 3412272 00:06:52.291 18:40:03 accel -- common/autotest_common.sh@965 -- # kill 3412272 00:06:52.291 18:40:03 accel -- common/autotest_common.sh@970 -- # wait 3412272 00:06:52.548 18:40:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:52.548 18:40:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:52.548 18:40:04 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:52.548 18:40:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.548 18:40:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.548 18:40:04 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:52.548 18:40:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
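The get_expected_opcs trace above amounts to turning the accel_get_opc_assignments RPC reply into a bash associative array keyed by opcode. A compact re-creation of that parsing, assuming $rpc_py resolves to SPDK's scripts/rpc.py and that the RPC returns a JSON object mapping each opcode to its module (every entry comes back as software in this run):

# jq flattens {"copy":"software",...} into one "opcode=module" line per entry;
# IFS== makes read split each line at the '=' so opcode and module land in
# separate variables, mirroring the IFS== / read -r opc module steps traced above.
declare -A expected_opcs
while IFS== read -r opc module; do
	expected_opcs["$opc"]=$module
done < <("$rpc_py" accel_get_opc_assignments |
	jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')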
00:06:52.548 18:40:04 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.548 18:40:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:52.548 18:40:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:52.548 18:40:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:52.548 18:40:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.548 18:40:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.548 ************************************ 00:06:52.548 START TEST accel_missing_filename 00:06:52.548 ************************************ 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.548 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:52.548 18:40:04 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:52.548 [2024-07-25 18:40:04.425592] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:52.548 [2024-07-25 18:40:04.425658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412431 ] 00:06:52.807 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.807 [2024-07-25 18:40:04.487953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.807 [2024-07-25 18:40:04.582936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.807 [2024-07-25 18:40:04.646231] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.066 [2024-07-25 18:40:04.730342] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:53.066 A filename is required. 
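What accel_missing_filename asserts is simply that a compress run with no input file fails, which is exactly the "A filename is required." error above. Reproduced by hand it would look roughly like the following, using the same accel_perf binary path as the rest of this workspace and leaving out the -c /dev/fd/62 config descriptor that the harness normally supplies:

# Expected to exit non-zero: -w compress needs -l <uncompressed input file>.
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress; then
	echo "unexpected success"
else
	echo "failed as expected"
fi

In the trace that follows, the NOT wrapper records that failure as es=234, folds it down through es=106 to es=1, and passes the test precisely because the command did not succeed.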
00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.066 00:06:53.066 real 0m0.408s 00:06:53.066 user 0m0.296s 00:06:53.066 sys 0m0.145s 00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.066 18:40:04 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:53.066 ************************************ 00:06:53.066 END TEST accel_missing_filename 00:06:53.066 ************************************ 00:06:53.066 18:40:04 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.066 18:40:04 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:53.066 18:40:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.066 18:40:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.066 ************************************ 00:06:53.066 START TEST accel_compress_verify 00:06:53.066 ************************************ 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.066 18:40:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.066 
18:40:04 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:53.066 18:40:04 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:53.066 [2024-07-25 18:40:04.885308] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:53.066 [2024-07-25 18:40:04.885366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412567 ] 00:06:53.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.326 [2024-07-25 18:40:04.946545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.326 [2024-07-25 18:40:05.035229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.326 [2024-07-25 18:40:05.090899] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.326 [2024-07-25 18:40:05.177146] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:53.585 00:06:53.585 Compression does not support the verify option, aborting. 00:06:53.585 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:53.585 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.585 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:53.585 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:53.585 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:53.585 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.585 00:06:53.585 real 0m0.394s 00:06:53.586 user 0m0.301s 00:06:53.586 sys 0m0.126s 00:06:53.586 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.586 18:40:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 ************************************ 00:06:53.586 END TEST accel_compress_verify 00:06:53.586 ************************************ 00:06:53.586 18:40:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 ************************************ 00:06:53.586 START TEST accel_wrong_workload 00:06:53.586 ************************************ 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:53.586 18:40:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:53.586 Unsupported workload type: foobar 00:06:53.586 [2024-07-25 18:40:05.320314] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:53.586 accel_perf options: 00:06:53.586 [-h help message] 00:06:53.586 [-q queue depth per core] 00:06:53.586 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.586 [-T number of threads per core 00:06:53.586 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.586 [-t time in seconds] 00:06:53.586 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.586 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:53.586 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.586 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.586 [-S for crc32c workload, use this seed value (default 0) 00:06:53.586 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.586 [-f for fill workload, use this BYTE value (default 255) 00:06:53.586 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.586 [-y verify result if this switch is on] 00:06:53.586 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.586 Can be used to spread operations across a wider range of memory. 
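For contrast with the rejected -w foobar above, any workload from the printed "-w workload type must be one of these" list is accepted. The crc32c variant exercised a few tests further down uses the same option set shown in the usage text; a standalone sketch, again dropping the harness-supplied -c /dev/fd/62 descriptor and assuming accel_perf falls back to its defaults without it:

# crc32c is in the supported workload list; per the usage text, -S 32 seeds the
# CRC calculation and -y verifies the result, as in the accel_crc32c test below.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y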
00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.586 00:06:53.586 real 0m0.022s 00:06:53.586 user 0m0.014s 00:06:53.586 sys 0m0.008s 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.586 18:40:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 ************************************ 00:06:53.586 END TEST accel_wrong_workload 00:06:53.586 ************************************ 00:06:53.586 18:40:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 Error: writing output failed: Broken pipe 00:06:53.586 ************************************ 00:06:53.586 START TEST accel_negative_buffers 00:06:53.586 ************************************ 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:53.586 18:40:05 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:53.586 -x option must be non-negative. 
00:06:53.586 [2024-07-25 18:40:05.381006] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:53.586 accel_perf options: 00:06:53.586 [-h help message] 00:06:53.586 [-q queue depth per core] 00:06:53.586 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.586 [-T number of threads per core 00:06:53.586 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.586 [-t time in seconds] 00:06:53.586 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.586 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:53.586 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.586 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.586 [-S for crc32c workload, use this seed value (default 0) 00:06:53.586 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.586 [-f for fill workload, use this BYTE value (default 255) 00:06:53.586 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.586 [-y verify result if this switch is on] 00:06:53.586 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.586 Can be used to spread operations across a wider range of memory. 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.586 00:06:53.586 real 0m0.021s 00:06:53.586 user 0m0.009s 00:06:53.586 sys 0m0.012s 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.586 18:40:05 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 ************************************ 00:06:53.586 END TEST accel_negative_buffers 00:06:53.586 ************************************ 00:06:53.586 18:40:05 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.586 Error: writing output failed: Broken pipe 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.586 18:40:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 ************************************ 00:06:53.586 START TEST accel_crc32c 00:06:53.586 ************************************ 00:06:53.586 18:40:05 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.586 18:40:05 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.587 18:40:05 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:53.587 18:40:05 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:53.587 [2024-07-25 18:40:05.447980] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:53.587 [2024-07-25 18:40:05.448044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412644 ] 00:06:53.845 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.845 [2024-07-25 18:40:05.510854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.845 [2024-07-25 18:40:05.603617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.845 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.845 18:40:05 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.846 18:40:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.258 18:40:06 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.258 18:40:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:55.259 18:40:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.259 00:06:55.259 real 0m1.406s 00:06:55.259 user 0m1.263s 00:06:55.259 sys 0m0.146s 00:06:55.259 18:40:06 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.259 18:40:06 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:55.259 ************************************ 00:06:55.259 END TEST accel_crc32c 00:06:55.259 ************************************ 00:06:55.259 18:40:06 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:55.259 18:40:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:55.259 18:40:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.259 18:40:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.259 ************************************ 00:06:55.259 START TEST accel_crc32c_C2 00:06:55.259 ************************************ 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.259 18:40:06 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.259 18:40:06 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:55.259 [2024-07-25 18:40:06.894054] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:55.259 [2024-07-25 18:40:06.894142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412883 ] 00:06:55.259 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.259 [2024-07-25 18:40:06.956504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.259 [2024-07-25 18:40:07.049339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.259 18:40:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.637 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.638 00:06:56.638 real 0m1.408s 00:06:56.638 user 0m1.268s 00:06:56.638 sys 0m0.141s 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.638 18:40:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:56.638 ************************************ 00:06:56.638 END TEST accel_crc32c_C2 00:06:56.638 ************************************ 00:06:56.638 18:40:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:56.638 18:40:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.638 18:40:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.638 18:40:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.638 ************************************ 00:06:56.638 START TEST accel_copy 00:06:56.638 ************************************ 00:06:56.638 18:40:08 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.638 
18:40:08 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:56.638 18:40:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:56.638 [2024-07-25 18:40:08.344831] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:56.638 [2024-07-25 18:40:08.344895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413068 ] 00:06:56.638 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.638 [2024-07-25 18:40:08.406992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.638 [2024-07-25 18:40:08.496670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.896 18:40:08 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 18:40:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:58.276 18:40:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.276 00:06:58.276 real 0m1.400s 00:06:58.276 user 0m1.252s 00:06:58.276 sys 0m0.149s 00:06:58.276 18:40:09 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.276 18:40:09 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.276 ************************************ 00:06:58.276 END TEST accel_copy 00:06:58.276 ************************************ 00:06:58.276 18:40:09 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.276 18:40:09 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:58.276 18:40:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.276 18:40:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.276 ************************************ 00:06:58.276 START TEST accel_fill 00:06:58.276 ************************************ 00:06:58.276 18:40:09 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.276 18:40:09 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:58.276 18:40:09 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:58.276 [2024-07-25 18:40:09.797360] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:58.276 [2024-07-25 18:40:09.797424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413226 ] 00:06:58.276 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.276 [2024-07-25 18:40:09.858964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.276 [2024-07-25 18:40:09.948660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.276 18:40:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:59.654 18:40:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.654 00:06:59.654 real 0m1.399s 00:06:59.654 user 0m1.261s 00:06:59.654 sys 0m0.140s 00:06:59.654 18:40:11 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.654 18:40:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:59.654 ************************************ 00:06:59.654 END TEST accel_fill 00:06:59.654 ************************************ 00:06:59.654 18:40:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:59.654 18:40:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:59.654 18:40:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.654 18:40:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.654 ************************************ 00:06:59.654 START TEST accel_copy_crc32c 00:06:59.654 ************************************ 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
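A note on reading the trace above: the long runs of 'case "$var" in', 'IFS=:' and 'read -r var val' lines are bash xtrace of accel.sh looping over the key:value pairs the benchmark echoes back (opcode, module, buffer size, queue/transaction counts, run time), and each test finishes with the '[[ -n software ]]' and '[[ -n fill ]]' style checks confirming the software module executed the expected opcode. The fill workload itself (run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y) writes a single byte pattern, 0x80 per the value echoed in the trace, across a 4096-byte buffer. A minimal Python sketch of what that operation amounts to, purely illustrative and not SPDK code:

  # Illustrative model of the software "fill" path exercised above (assumption: not SPDK code).
  PATTERN = 0x80          # -f 128 on the accel_perf command line, shown as val=0x80 in the trace
  BUF_SIZE = 4096         # '4096 bytes' in the trace

  dst = bytearray(BUF_SIZE)
  dst[:] = bytes([PATTERN]) * BUF_SIZE       # the whole opcode: a memset-style fill

  assert all(b == PATTERN for b in dst)      # pass condition: every byte carries the pattern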
00:06:59.654 [2024-07-25 18:40:11.247055] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:59.654 [2024-07-25 18:40:11.247156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413382 ] 00:06:59.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.654 [2024-07-25 18:40:11.308366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.654 [2024-07-25 18:40:11.404819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.654 18:40:11 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.654 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.655 18:40:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.034 00:07:01.034 real 0m1.398s 00:07:01.034 user 0m1.256s 00:07:01.034 sys 0m0.144s 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.034 18:40:12 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:01.034 ************************************ 00:07:01.034 END TEST accel_copy_crc32c 00:07:01.034 ************************************ 00:07:01.034 18:40:12 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.034 18:40:12 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:01.034 18:40:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.034 18:40:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.034 ************************************ 00:07:01.034 START TEST accel_copy_crc32c_C2 00:07:01.034 ************************************ 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:01.034 [2024-07-25 18:40:12.687575] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:01.034 [2024-07-25 18:40:12.687638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413656 ] 00:07:01.034 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.034 [2024-07-25 18:40:12.748752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.034 [2024-07-25 18:40:12.841666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.034 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.035 18:40:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.416 00:07:02.416 real 0m1.409s 00:07:02.416 user 0m1.269s 00:07:02.416 sys 0m0.141s 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.416 18:40:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:02.416 
************************************ 00:07:02.416 END TEST accel_copy_crc32c_C2 00:07:02.416 ************************************ 00:07:02.416 18:40:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:02.416 18:40:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:02.416 18:40:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.416 18:40:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.416 ************************************ 00:07:02.416 START TEST accel_dualcast 00:07:02.416 ************************************ 00:07:02.416 18:40:14 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:02.416 18:40:14 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:02.416 [2024-07-25 18:40:14.137504] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
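For context on the two copy_crc32c tests that just finished: the workload copies a source buffer and computes a CRC-32C (Castagnoli) checksum over the copied data, and the -C 2 variant carries the checksum across two chained source chunks (4096 and 8192 bytes in the trace above, with seed 0). A small self-contained sketch of that checksum and of the chaining property, offered as an illustration rather than SPDK's actual implementation:

  def crc32c(data: bytes, crc: int = 0) -> int:
      # Bit-by-bit CRC-32C (Castagnoli), reflected polynomial 0x82F63B78; illustrative only.
      crc ^= 0xFFFFFFFF
      for byte in data:
          crc ^= byte
          for _ in range(8):
              crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
      return crc ^ 0xFFFFFFFF

  assert crc32c(b"123456789") == 0xE3069283        # standard CRC-32C check value

  # -C 2: the running CRC is carried from the first source buffer into the second,
  # matching the 4096-byte plus 8192-byte chain seen in the trace above.
  src1, src2 = bytes(4096), bytes(8192)
  assert crc32c(src2, crc32c(src1)) == crc32c(src1 + src2)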
00:07:02.416 [2024-07-25 18:40:14.137564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413811 ] 00:07:02.416 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.416 [2024-07-25 18:40:14.198270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.416 [2024-07-25 18:40:14.289798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 
18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.674 18:40:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:04.052 18:40:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.052 00:07:04.052 real 0m1.398s 00:07:04.052 user 0m1.249s 00:07:04.052 sys 0m0.149s 00:07:04.052 18:40:15 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.052 18:40:15 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:04.052 ************************************ 00:07:04.052 END TEST accel_dualcast 00:07:04.052 ************************************ 00:07:04.052 18:40:15 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:04.052 18:40:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:04.052 18:40:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.052 18:40:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.052 ************************************ 00:07:04.052 START TEST accel_compare 00:07:04.052 ************************************ 00:07:04.052 18:40:15 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:04.052 [2024-07-25 18:40:15.579769] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
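The dualcast test that just passed exercises an opcode that copies one source buffer into two destination buffers in a single operation (4096 bytes here, per the trace). What the software fallback boils down to, sketched in Python for illustration only:

  import os

  src = os.urandom(4096)        # one 4 KiB source, as in the trace
  dst1 = bytearray(4096)
  dst2 = bytearray(4096)

  # "dualcast": a single operation that lands the same source in both destinations;
  # the software path is effectively two back-to-back copies.
  dst1[:] = src
  dst2[:] = src

  assert bytes(dst1) == src and bytes(dst2) == src   # pass condition: both copies match the source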
00:07:04.052 [2024-07-25 18:40:15.579834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413971 ] 00:07:04.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.052 [2024-07-25 18:40:15.640663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.052 [2024-07-25 18:40:15.732373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.052 18:40:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.428 18:40:16 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.428 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:05.429 18:40:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.429 00:07:05.429 real 0m1.401s 00:07:05.429 user 0m1.264s 00:07:05.429 sys 0m0.137s 00:07:05.429 18:40:16 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.429 18:40:16 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:05.429 ************************************ 00:07:05.429 END TEST accel_compare 00:07:05.429 ************************************ 00:07:05.429 18:40:16 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:05.429 18:40:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:05.429 18:40:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.429 18:40:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.429 ************************************ 00:07:05.429 START TEST accel_xor 00:07:05.429 ************************************ 00:07:05.429 18:40:17 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:05.429 [2024-07-25 18:40:17.031552] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
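Editor's note: accel_compare just completed (real 0m1.401s, again on the software module). The workload checks that two buffers hold identical contents. A hedged sketch of that idea follows; the miscompare-offset reporting and the `sw_compare` name are illustrative assumptions, not the example binary's exact behaviour:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Return -1 if the buffers match, otherwise the offset of the first
 * differing byte.  Purely illustrative of the "compare" opcode. */
static long sw_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    if (memcmp(a, b, len) == 0) {
        return -1;
    }
    for (size_t i = 0; i < len; i++) {
        if (a[i] != b[i]) {
            return (long)i;
        }
    }
    return -1; /* unreachable: memcmp already reported a difference */
}

int main(void)
{
    uint8_t x[4096], y[4096];
    memset(x, 0xAB, sizeof(x));
    memset(y, 0xAB, sizeof(y));
    y[100] = 0xCD;
    printf("first mismatch at offset %ld\n", sw_compare(x, y, sizeof(x)));
    return 0;
}
```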
00:07:05.429 [2024-07-25 18:40:17.031610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414127 ] 00:07:05.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.429 [2024-07-25 18:40:17.094287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.429 [2024-07-25 18:40:17.185892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.429 18:40:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.806 
18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.806 00:07:06.806 real 0m1.405s 00:07:06.806 user 0m1.261s 00:07:06.806 sys 0m0.144s 00:07:06.806 18:40:18 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.806 18:40:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:06.806 ************************************ 00:07:06.806 END TEST accel_xor 00:07:06.806 ************************************ 00:07:06.806 18:40:18 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:06.806 18:40:18 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:06.806 18:40:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.806 18:40:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.806 ************************************ 00:07:06.806 START TEST accel_xor 00:07:06.806 ************************************ 00:07:06.806 18:40:18 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:06.806 18:40:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:06.806 [2024-07-25 18:40:18.478841] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
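Editor's note: the accel_xor run above (real 0m1.405s) uses the default two source buffers; the next run is started with `-x 3` and uses three. Both reduce N equally sized sources into one destination by byte-wise XOR. A minimal self-contained sketch under that assumption — `sw_xor_gen` is an invented name, not SPDK's software module:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* XOR-reduce n_srcs equally sized buffers into dst (illustration of the
 * "xor" opcode, generic in the number of sources). */
static void sw_xor_gen(uint8_t *dst, uint8_t *const *srcs, size_t n_srcs, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = 0;
        for (size_t s = 0; s < n_srcs; s++) {
            v ^= srcs[s][i];
        }
        dst[i] = v;
    }
}

int main(void)
{
    static uint8_t a[4096], b[4096], dst[4096];   /* default case: two sources */
    memset(a, 0x0F, sizeof a);
    memset(b, 0x33, sizeof b);

    uint8_t *srcs[] = { a, b };
    sw_xor_gen(dst, srcs, 2, sizeof dst);
    assert(dst[0] == (0x0F ^ 0x33));
    return 0;
}
```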
00:07:06.807 [2024-07-25 18:40:18.478904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414401 ] 00:07:06.807 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.807 [2024-07-25 18:40:18.539609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.807 [2024-07-25 18:40:18.632216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.067 18:40:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.005 
18:40:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.005 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:08.006 18:40:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.006 00:07:08.006 real 0m1.407s 00:07:08.006 user 0m1.266s 00:07:08.006 sys 0m0.142s 00:07:08.006 18:40:19 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.006 18:40:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:08.006 ************************************ 00:07:08.006 END TEST accel_xor 00:07:08.006 ************************************ 00:07:08.265 18:40:19 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:08.265 18:40:19 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:08.265 18:40:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.265 18:40:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.265 ************************************ 00:07:08.265 START TEST accel_dif_verify 00:07:08.265 ************************************ 00:07:08.265 18:40:19 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:08.265 18:40:19 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:08.265 [2024-07-25 18:40:19.927520] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
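Editor's note: the second accel_xor pass (real 0m1.407s) is the `-x 3` variant of the same operation. A tiny standalone driver showing what three sources mean in practice — again only an illustration of the flag, not the example binary's code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    static uint8_t a[4096], b[4096], c[4096], dst[4096];
    memset(a, 0x0F, sizeof a);
    memset(b, 0x33, sizeof b);
    memset(c, 0x55, sizeof c);

    /* Same reduction as in the previous sketch, just with three inputs
     * (what -x 3 requests). */
    for (size_t i = 0; i < sizeof dst; i++) {
        dst[i] = a[i] ^ b[i] ^ c[i];
    }
    assert(dst[0] == 0x69);   /* 0x0F ^ 0x33 ^ 0x55 */
    return 0;
}
```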
00:07:08.265 [2024-07-25 18:40:19.927583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414556 ] 00:07:08.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.265 [2024-07-25 18:40:19.988134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.265 [2024-07-25 18:40:20.100630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.525 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 
18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.526 18:40:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.464 
18:40:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:09.464 18:40:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.464 00:07:09.464 real 0m1.412s 00:07:09.464 user 0m1.270s 00:07:09.464 sys 0m0.144s 00:07:09.464 18:40:21 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.464 18:40:21 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:09.464 ************************************ 00:07:09.464 END TEST accel_dif_verify 00:07:09.464 ************************************ 00:07:09.723 18:40:21 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:09.723 18:40:21 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:09.723 18:40:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.723 18:40:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.723 ************************************ 00:07:09.723 START TEST accel_dif_generate 00:07:09.723 ************************************ 00:07:09.723 18:40:21 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
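Editor's note: accel_dif_verify passed (real 0m1.412s) on the software path. The 512-byte and 8-byte values echoed in its trace are consistent with the classic T10 DIF layout: each 512-byte data block is followed by an 8-byte protection tuple carrying a CRC-16 guard, an application tag and a reference tag. The sketch below shows that layout and a guard check only; `struct dif_tuple`, `crc16_t10dif` and `sw_dif_verify` are illustrative names and simplifications (byte order and tag checks omitted), not SPDK's dif library:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Classic T10 DIF tuple appended to each data block.  Real DIF fields are
 * big-endian; byte-order handling is omitted to keep the sketch short. */
struct dif_tuple {
    uint16_t guard;     /* CRC-16 (polynomial 0x8BB7) over the data block */
    uint16_t app_tag;
    uint32_t ref_tag;
};

/* Bitwise CRC-16/T10-DIF: polynomial 0x8BB7, init 0, no reflection. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

/* Check the guard of every 512+8 byte extended block; 0 means all blocks pass. */
static int sw_dif_verify(const uint8_t *ext_buf, size_t num_blocks)
{
    for (size_t i = 0; i < num_blocks; i++) {
        const uint8_t *blk = ext_buf + i * (512 + sizeof(struct dif_tuple));
        struct dif_tuple dif;

        memcpy(&dif, blk + 512, sizeof(dif));
        if (dif.guard != crc16_t10dif(blk, 512)) {
            return -1;
        }
    }
    return 0;
}
```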
00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.723 18:40:21 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.724 18:40:21 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:09.724 18:40:21 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:09.724 [2024-07-25 18:40:21.381650] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:09.724 [2024-07-25 18:40:21.381714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414712 ] 00:07:09.724 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.724 [2024-07-25 18:40:21.445554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.724 [2024-07-25 18:40:21.538628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.724 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.724 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.724 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.724 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.724 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.983 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.984 18:40:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:10.922 18:40:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.922 00:07:10.922 real 0m1.407s 00:07:10.922 user 0m1.254s 00:07:10.922 sys 
0m0.156s 00:07:10.922 18:40:22 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.922 18:40:22 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:10.922 ************************************ 00:07:10.922 END TEST accel_dif_generate 00:07:10.922 ************************************ 00:07:10.922 18:40:22 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:10.922 18:40:22 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:10.922 18:40:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.922 18:40:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.182 ************************************ 00:07:11.182 START TEST accel_dif_generate_copy 00:07:11.182 ************************************ 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:11.182 18:40:22 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:11.182 [2024-07-25 18:40:22.830787] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
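Editor's note: accel_dif_generate (real 0m1.407s) is the producer side of the previous test: for every data block it computes and appends the 8-byte protection tuple. The dif_generate_copy run that starts next appears to combine the same generation with a copy of the data from a separate source buffer into the DIF-extended destination. Continuing the verify sketch above (its `crc16_t10dif` is reused here; drop the `static` there if the two sketches are compiled together), with `sw_dif_generate` again an invented, illustrative name:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* CRC-16/T10-DIF as defined in the verify sketch after accel_dif_verify. */
uint16_t crc16_t10dif(const uint8_t *buf, size_t len);

/* Append an 8-byte protection tuple to every 512-byte block: 2-byte guard,
 * 2-byte app tag, 4-byte ref tag that increments per block (a common, but
 * here assumed, convention). */
static void sw_dif_generate(uint8_t *ext_buf, size_t num_blocks,
                            uint16_t app_tag, uint32_t start_ref_tag)
{
    for (size_t i = 0; i < num_blocks; i++) {
        uint8_t *blk = ext_buf + i * (512 + 8);
        uint16_t guard = crc16_t10dif(blk, 512);
        uint32_t ref_tag = start_ref_tag + (uint32_t)i;

        memcpy(blk + 512, &guard, sizeof(guard));
        memcpy(blk + 514, &app_tag, sizeof(app_tag));
        memcpy(blk + 516, &ref_tag, sizeof(ref_tag));
    }
}
```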
00:07:11.182 [2024-07-25 18:40:22.830846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414938 ] 00:07:11.182 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.182 [2024-07-25 18:40:22.891867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.182 [2024-07-25 18:40:22.984631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.182 18:40:23 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.182 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.183 18:40:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.558 00:07:12.558 real 0m1.405s 00:07:12.558 user 0m1.261s 00:07:12.558 sys 0m0.145s 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.558 18:40:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.558 ************************************ 00:07:12.558 END TEST accel_dif_generate_copy 00:07:12.558 ************************************ 00:07:12.558 18:40:24 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:12.558 18:40:24 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.558 18:40:24 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:12.558 18:40:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.558 18:40:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.558 ************************************ 00:07:12.558 START TEST accel_comp 00:07:12.558 ************************************ 00:07:12.558 18:40:24 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:12.558 18:40:24 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:12.558 [2024-07-25 18:40:24.288023] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:12.558 [2024-07-25 18:40:24.288094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415140 ] 00:07:12.558 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.558 [2024-07-25 18:40:24.351382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.818 [2024-07-25 18:40:24.442748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 
18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:12.818 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.819 18:40:24 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.819 18:40:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.198 18:40:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:14.199 18:40:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.199 00:07:14.199 real 0m1.408s 00:07:14.199 user 0m1.267s 00:07:14.199 sys 0m0.143s 00:07:14.199 18:40:25 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.199 18:40:25 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:14.199 ************************************ 00:07:14.199 END TEST accel_comp 00:07:14.199 ************************************ 00:07:14.199 18:40:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.199 18:40:25 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:14.199 18:40:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.199 18:40:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.199 ************************************ 00:07:14.199 START TEST accel_decomp 00:07:14.199 ************************************ 00:07:14.199 18:40:25 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:14.199 [2024-07-25 18:40:25.736629] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:14.199 [2024-07-25 18:40:25.736691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415301 ] 00:07:14.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.199 [2024-07-25 18:40:25.798164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.199 [2024-07-25 18:40:25.891127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.199 18:40:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.581 18:40:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.581 00:07:15.581 real 0m1.406s 00:07:15.581 user 0m1.267s 00:07:15.581 sys 0m0.141s 00:07:15.581 18:40:27 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.581 18:40:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:15.581 ************************************ 00:07:15.581 END TEST accel_decomp 00:07:15.581 ************************************ 00:07:15.581 
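The accel_decomp case above is driven through the accel_perf example binary; the full harness command line is recorded in the entries above. Below is a minimal sketch, not part of the recorded run, of how the same software-path decompress case could be reproduced by hand, assuming a built SPDK tree at the workspace path seen in this log (adjust SPDK_DIR for a local checkout; the harness-specific "-c /dev/fd/62" JSON-config descriptor is omitted).

    # Minimal sketch (assumption: SPDK already built at this path; not taken
    # from the recorded run). Mirrors the logged invocation
    # "-t 1 -w decompress -l .../test/accel/bib -y": run the software engine
    # decompress workload for 1 second against the bundled test input.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y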
18:40:27 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.581 18:40:27 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:15.581 18:40:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.581 18:40:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.581 ************************************ 00:07:15.581 START TEST accel_decmop_full 00:07:15.581 ************************************ 00:07:15.581 18:40:27 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:15.581 [2024-07-25 18:40:27.183402] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:15.581 [2024-07-25 18:40:27.183473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415459 ] 00:07:15.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.581 [2024-07-25 18:40:27.243009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.581 [2024-07-25 18:40:27.338857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:15.581 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.582 18:40:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.960 18:40:28 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.960 00:07:16.960 real 0m1.405s 00:07:16.960 user 0m1.263s 00:07:16.960 sys 0m0.144s 00:07:16.960 18:40:28 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.960 18:40:28 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:16.960 ************************************ 00:07:16.961 END TEST accel_decmop_full 00:07:16.961 ************************************ 00:07:16.961 18:40:28 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.961 18:40:28 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:16.961 18:40:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.961 18:40:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.961 ************************************ 00:07:16.961 START TEST accel_decomp_mcore 00:07:16.961 ************************************ 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:16.961 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:16.961 [2024-07-25 18:40:28.633217] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:16.961 [2024-07-25 18:40:28.633273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415732 ] 00:07:16.961 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.961 [2024-07-25 18:40:28.694118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.961 [2024-07-25 18:40:28.790723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.961 [2024-07-25 18:40:28.790793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.961 [2024-07-25 18:40:28.790886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.961 [2024-07-25 18:40:28.790889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.220 18:40:28 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.220 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.221 18:40:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.157 00:07:18.157 real 0m1.416s 00:07:18.157 user 0m0.010s 00:07:18.157 sys 0m0.004s 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.157 18:40:30 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:18.157 ************************************ 00:07:18.157 END TEST accel_decomp_mcore 00:07:18.157 ************************************ 00:07:18.416 18:40:30 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.416 18:40:30 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:18.416 18:40:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.416 18:40:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.416 ************************************ 00:07:18.416 START TEST accel_decomp_full_mcore 00:07:18.416 ************************************ 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:18.416 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:18.416 [2024-07-25 18:40:30.097960] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:18.416 [2024-07-25 18:40:30.098023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415888 ] 00:07:18.416 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.416 [2024-07-25 18:40:30.159341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.416 [2024-07-25 18:40:30.255634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.416 [2024-07-25 18:40:30.255702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.416 [2024-07-25 18:40:30.255795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.416 [2024-07-25 18:40:30.255798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:18.674 18:40:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.674 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.675 18:40:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.672 00:07:19.672 real 0m1.411s 00:07:19.672 user 0m4.721s 00:07:19.672 sys 0m0.151s 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.672 18:40:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:19.672 ************************************ 00:07:19.672 END TEST accel_decomp_full_mcore 00:07:19.672 ************************************ 00:07:19.672 18:40:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.672 18:40:31 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:19.672 18:40:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.672 18:40:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.672 ************************************ 00:07:19.672 START TEST accel_decomp_mthread 00:07:19.672 ************************************ 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.672 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.673 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:19.673 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:19.931 [2024-07-25 18:40:31.559619] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:19.931 [2024-07-25 18:40:31.559682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416050 ] 00:07:19.931 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.931 [2024-07-25 18:40:31.620751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.931 [2024-07-25 18:40:31.711546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.931 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.932 18:40:31 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.307 00:07:21.307 real 0m1.411s 00:07:21.307 user 0m1.271s 00:07:21.307 sys 0m0.143s 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.307 18:40:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:21.307 ************************************ 00:07:21.307 END TEST accel_decomp_mthread 00:07:21.307 ************************************ 00:07:21.307 18:40:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.307 18:40:32 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:21.307 18:40:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.307 18:40:32 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.307 ************************************ 00:07:21.307 START TEST accel_decomp_full_mthread 00:07:21.307 ************************************ 00:07:21.307 18:40:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.307 18:40:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:21.307 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:21.307 [2024-07-25 18:40:33.016470] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
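Stripped of the xtrace prefixes, the command this accel_decomp_full_mthread run drives is the accel_perf example binary shown in the trace. The invocation below is copied from the log; the per-flag comments are a reading of how these tests use the flags, not accel_perf's own documentation:

bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 -T 2
# -c /dev/fd/62 : accel JSON config handed over an inherited file descriptor
# -t 1          : run for one second ('1 seconds' in the trace)
# -w decompress : exercise the decompress opcode
# -l "$bib"     : compressed input generated earlier in the suite
# -y            : verify the decompressed output
# -o 0, -T 2    : the full-sized buffer and two threads that give the test its _full_mthread name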
00:07:21.307 [2024-07-25 18:40:33.016540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416229 ] 00:07:21.307 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.307 [2024-07-25 18:40:33.077145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.307 [2024-07-25 18:40:33.169559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.567 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:21.568 18:40:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.948 00:07:22.948 real 0m1.447s 00:07:22.948 user 0m1.308s 00:07:22.948 sys 0m0.143s 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.948 18:40:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:22.948 ************************************ 00:07:22.948 END TEST accel_decomp_full_mthread 00:07:22.948 
************************************ 00:07:22.948 18:40:34 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:22.948 18:40:34 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.948 18:40:34 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:22.948 18:40:34 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:22.948 18:40:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.948 18:40:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.948 18:40:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.948 18:40:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.948 18:40:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.948 18:40:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.948 18:40:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.948 18:40:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:22.948 18:40:34 accel -- accel/accel.sh@41 -- # jq -r . 00:07:22.948 ************************************ 00:07:22.948 START TEST accel_dif_functional_tests 00:07:22.948 ************************************ 00:07:22.948 18:40:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.948 [2024-07-25 18:40:34.528951] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:22.948 [2024-07-25 18:40:34.529005] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416481 ] 00:07:22.948 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.948 [2024-07-25 18:40:34.588630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.948 [2024-07-25 18:40:34.683243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.948 [2024-07-25 18:40:34.683300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.948 [2024-07-25 18:40:34.683304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.948 00:07:22.948 00:07:22.948 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.948 http://cunit.sourceforge.net/ 00:07:22.948 00:07:22.948 00:07:22.948 Suite: accel_dif 00:07:22.948 Test: verify: DIF generated, GUARD check ...passed 00:07:22.948 Test: verify: DIF generated, APPTAG check ...passed 00:07:22.948 Test: verify: DIF generated, REFTAG check ...passed 00:07:22.948 Test: verify: DIF not generated, GUARD check ...[2024-07-25 18:40:34.776311] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.948 passed 00:07:22.948 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 18:40:34.776394] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.948 passed 00:07:22.948 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 18:40:34.776425] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.948 passed 00:07:22.948 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:22.948 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 18:40:34.776485] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:22.948 passed 00:07:22.948 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:22.948 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:22.948 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:22.948 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 18:40:34.776658] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:22.948 passed 00:07:22.948 Test: verify copy: DIF generated, GUARD check ...passed 00:07:22.948 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:22.948 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:22.948 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 18:40:34.776813] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.948 passed 00:07:22.948 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 18:40:34.776850] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.948 passed 00:07:22.948 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 18:40:34.776884] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.948 passed 00:07:22.948 Test: generate copy: DIF generated, GUARD check ...passed 00:07:22.948 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:22.948 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:22.948 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:22.948 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:22.948 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:22.948 Test: generate copy: iovecs-len validate ...[2024-07-25 18:40:34.777136] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:22.948 passed 00:07:22.948 Test: generate copy: buffer alignment validate ...passed 00:07:22.948 00:07:22.948 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.948 suites 1 1 n/a 0 0 00:07:22.948 tests 26 26 26 0 0 00:07:22.948 asserts 115 115 115 0 n/a 00:07:22.948 00:07:22.948 Elapsed time = 0.002 seconds 00:07:23.207 00:07:23.207 real 0m0.487s 00:07:23.207 user 0m0.748s 00:07:23.207 sys 0m0.186s 00:07:23.207 18:40:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.207 18:40:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:23.207 ************************************ 00:07:23.207 END TEST accel_dif_functional_tests 00:07:23.207 ************************************ 00:07:23.207 00:07:23.207 real 0m31.656s 00:07:23.207 user 0m35.033s 00:07:23.207 sys 0m4.567s 00:07:23.207 18:40:35 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.207 18:40:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.207 ************************************ 00:07:23.207 END TEST accel 00:07:23.207 ************************************ 00:07:23.207 18:40:35 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:23.207 18:40:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.207 18:40:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.207 18:40:35 -- common/autotest_common.sh@10 -- # set +x 00:07:23.207 ************************************ 00:07:23.207 START TEST accel_rpc 00:07:23.207 ************************************ 00:07:23.207 18:40:35 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:23.465 * Looking for test storage... 00:07:23.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:23.465 18:40:35 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.465 18:40:35 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3416555 00:07:23.465 18:40:35 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:23.465 18:40:35 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3416555 00:07:23.465 18:40:35 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3416555 ']' 00:07:23.465 18:40:35 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.465 18:40:35 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.465 18:40:35 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.465 18:40:35 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.465 18:40:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.465 [2024-07-25 18:40:35.146545] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
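The accel_rpc test that starts above reduces to a short RPC conversation with a spdk_tgt launched with --wait-for-rpc. A condensed sketch of that sequence, using the same rpc.py methods that appear later in the trace, with the workspace path shortened into a variable:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Assign the copy opcode to the software module while the framework is still waiting.
$rpc accel_assign_opc -o copy -m software

# Let the target finish initialization (it was started with --wait-for-rpc).
$rpc framework_start_init

# Verify the assignment stuck: the .copy entry should report "software".
$rpc accel_get_opc_assignments | jq -r .copy | grep software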
00:07:23.465 [2024-07-25 18:40:35.146626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416555 ] 00:07:23.465 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.465 [2024-07-25 18:40:35.203253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.465 [2024-07-25 18:40:35.289947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.724 18:40:35 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.724 18:40:35 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:23.724 18:40:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:23.724 18:40:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:23.724 18:40:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:23.724 18:40:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:23.724 18:40:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:23.724 18:40:35 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.724 18:40:35 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.724 18:40:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.724 ************************************ 00:07:23.724 START TEST accel_assign_opcode 00:07:23.724 ************************************ 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.724 [2024-07-25 18:40:35.378634] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.724 [2024-07-25 18:40:35.386641] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.724 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:23.983 18:40:35 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.983 software 00:07:23.983 00:07:23.983 real 0m0.292s 00:07:23.983 user 0m0.038s 00:07:23.983 sys 0m0.007s 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.983 18:40:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.983 ************************************ 00:07:23.983 END TEST accel_assign_opcode 00:07:23.983 ************************************ 00:07:23.983 18:40:35 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3416555 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3416555 ']' 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3416555 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3416555 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3416555' 00:07:23.983 killing process with pid 3416555 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@965 -- # kill 3416555 00:07:23.983 18:40:35 accel_rpc -- common/autotest_common.sh@970 -- # wait 3416555 00:07:24.551 00:07:24.551 real 0m1.077s 00:07:24.551 user 0m1.017s 00:07:24.552 sys 0m0.413s 00:07:24.552 18:40:36 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.552 18:40:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.552 ************************************ 00:07:24.552 END TEST accel_rpc 00:07:24.552 ************************************ 00:07:24.552 18:40:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:24.552 18:40:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.552 18:40:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.552 18:40:36 -- common/autotest_common.sh@10 -- # set +x 00:07:24.552 ************************************ 00:07:24.552 START TEST app_cmdline 00:07:24.552 ************************************ 00:07:24.552 18:40:36 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:24.552 * Looking for test storage... 
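The cmdline test that begins here starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then checks that an allowed call succeeds while anything else is rejected, which is exactly what the JSON-RPC output further down shows. A condensed sketch of the two calls, reusing the rpc.py path from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Allowed by --rpcs-allowed: returns the version object seen in the trace
# ("SPDK v24.05.1-pre git sha1 241d0f3c9").
$rpc spdk_get_version

# Not on the allowlist: the target answers with JSON-RPC error -32601 "Method not found".
$rpc env_dpdk_get_mem_stats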
00:07:24.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:24.552 18:40:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:24.552 18:40:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3416761 00:07:24.552 18:40:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:24.552 18:40:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3416761 00:07:24.552 18:40:36 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3416761 ']' 00:07:24.552 18:40:36 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.552 18:40:36 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.552 18:40:36 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.552 18:40:36 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.552 18:40:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.552 [2024-07-25 18:40:36.273627] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:24.552 [2024-07-25 18:40:36.273702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416761 ] 00:07:24.552 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.552 [2024-07-25 18:40:36.331225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.552 [2024-07-25 18:40:36.418852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.809 18:40:36 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.809 18:40:36 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:24.809 18:40:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:25.067 { 00:07:25.067 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:25.067 "fields": { 00:07:25.067 "major": 24, 00:07:25.067 "minor": 5, 00:07:25.067 "patch": 1, 00:07:25.067 "suffix": "-pre", 00:07:25.067 "commit": "241d0f3c9" 00:07:25.067 } 00:07:25.067 } 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:25.067 18:40:36 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:25.067 18:40:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:25.067 18:40:36 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:25.067 18:40:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.067 18:40:36 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:25.067 18:40:36 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.325 18:40:36 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.325 request: 00:07:25.325 { 00:07:25.325 "method": "env_dpdk_get_mem_stats", 00:07:25.325 "req_id": 1 00:07:25.325 } 00:07:25.325 Got JSON-RPC error response 00:07:25.325 response: 00:07:25.325 { 00:07:25.325 "code": -32601, 00:07:25.325 "message": "Method not found" 00:07:25.325 } 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.325 18:40:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3416761 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3416761 ']' 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3416761 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:25.325 18:40:37 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3416761 00:07:25.584 18:40:37 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:25.584 18:40:37 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:25.584 18:40:37 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3416761' 00:07:25.584 killing process with pid 3416761 00:07:25.584 18:40:37 app_cmdline -- common/autotest_common.sh@965 -- # kill 3416761 00:07:25.584 18:40:37 app_cmdline -- common/autotest_common.sh@970 -- # wait 3416761 00:07:25.842 00:07:25.842 real 0m1.421s 00:07:25.842 user 0m1.709s 00:07:25.842 sys 0m0.483s 00:07:25.842 18:40:37 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.842 18:40:37 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.842 ************************************ 00:07:25.842 END TEST app_cmdline 00:07:25.842 ************************************ 00:07:25.842 18:40:37 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.842 18:40:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:25.842 18:40:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.842 18:40:37 -- common/autotest_common.sh@10 -- # set +x 00:07:25.842 ************************************ 00:07:25.842 START TEST version 00:07:25.842 ************************************ 00:07:25.842 18:40:37 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.842 * Looking for test storage... 00:07:25.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.842 18:40:37 version -- app/version.sh@17 -- # get_header_version major 00:07:25.842 18:40:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # cut -f2 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.842 18:40:37 version -- app/version.sh@17 -- # major=24 00:07:25.842 18:40:37 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.842 18:40:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # cut -f2 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.842 18:40:37 version -- app/version.sh@18 -- # minor=5 00:07:25.842 18:40:37 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.842 18:40:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # cut -f2 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.842 18:40:37 version -- app/version.sh@19 -- # patch=1 00:07:25.842 18:40:37 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.842 18:40:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # cut -f2 00:07:25.842 18:40:37 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.842 18:40:37 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.842 18:40:37 version -- app/version.sh@22 -- # version=24.5 00:07:25.842 18:40:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.842 18:40:37 version -- app/version.sh@25 -- # version=24.5.1 00:07:25.843 18:40:37 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:25.843 18:40:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:25.843 18:40:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
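The get_header_version calls traced above scrape include/spdk/version.h with the same grep/cut/tr chain each time. A standalone form of that helper, copied from the trace and assuming the header's define fields are tab-separated so that cut -f2 isolates the value:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Extract one SPDK_VERSION_* field from version.h, e.g. MAJOR -> 24.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$spdk/include/spdk/version.h" \
        | cut -f2 | tr -d '"'
}

get_header_version MAJOR    # 24 in this run
get_header_version MINOR    # 5
get_header_version PATCH    # 1
get_header_version SUFFIX   # -pre, which version.sh folds into the 24.5.1rc0 compared below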
00:07:26.102 18:40:37 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:26.102 18:40:37 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:26.102 00:07:26.102 real 0m0.098s 00:07:26.102 user 0m0.059s 00:07:26.102 sys 0m0.058s 00:07:26.102 18:40:37 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.102 18:40:37 version -- common/autotest_common.sh@10 -- # set +x 00:07:26.102 ************************************ 00:07:26.102 END TEST version 00:07:26.102 ************************************ 00:07:26.102 18:40:37 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@198 -- # uname -s 00:07:26.102 18:40:37 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:26.102 18:40:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:26.102 18:40:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:26.102 18:40:37 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:26.102 18:40:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.102 18:40:37 -- common/autotest_common.sh@10 -- # set +x 00:07:26.102 18:40:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:26.102 18:40:37 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:26.102 18:40:37 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.102 18:40:37 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:26.102 18:40:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.102 18:40:37 -- common/autotest_common.sh@10 -- # set +x 00:07:26.102 ************************************ 00:07:26.102 START TEST nvmf_tcp 00:07:26.102 ************************************ 00:07:26.102 18:40:37 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.102 * Looking for test storage... 00:07:26.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.102 18:40:37 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.102 18:40:37 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.102 18:40:37 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.102 18:40:37 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.102 18:40:37 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.102 18:40:37 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.102 18:40:37 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:26.102 18:40:37 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:26.102 18:40:37 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:26.102 18:40:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:26.102 18:40:37 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.102 18:40:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:26.102 18:40:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.102 18:40:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.102 ************************************ 00:07:26.102 START TEST nvmf_example 00:07:26.102 ************************************ 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:26.102 * Looking for test storage... 
00:07:26.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.102 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.103 18:40:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:28.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:28.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:28.008 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:28.008 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.008 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.266 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.266 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.266 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.266 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:07:28.267 00:07:28.267 --- 10.0.0.2 ping statistics --- 00:07:28.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.267 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:07:28.267 00:07:28.267 --- 10.0.0.1 ping statistics --- 00:07:28.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.267 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.267 18:40:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3418774 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3418774 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3418774 ']' 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
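Note: the nvmf_tcp_init steps traced above carve the two ice ports into a small two-node topology: the target port (cvl_0_0) is moved into a network namespace and addressed 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction confirms connectivity. Condensed into a standalone sketch (device names and addresses copied from the trace):

  NS=cvl_0_0_ns_spdk            # namespace that will host the NVMe-oF target
  TGT=cvl_0_0                   # target-side port, 10.0.0.2 inside $NS
  INI=cvl_0_1                   # initiator-side port, 10.0.0.1 in the root namespace

  ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                       # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace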
00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.267 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.267 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.202 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.202 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:29.202 18:40:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:29.202 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.202 18:40:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:29.202 18:40:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:29.462 EAL: No free 2048 kB hugepages reported on node 1 
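Note: stripped of the xtrace plumbing, the target provisioning above is a five-call RPC sequence followed by a single perf run against the new listener. A sketch under the assumption that rpc_cmd wraps the usual scripts/rpc.py client talking to /var/tmp/spdk.sock (flags and NQNs copied from the trace):

  rpc=scripts/rpc.py   # assumed RPC client path; rpc_cmd in the trace resolves to it

  $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO unit
  $rpc bdev_malloc_create 64 512                        # 64 MiB RAM-backed bdev -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 10 s random read/write run (QD 64, 4 KiB IOs, -M 30 = 30% reads) from the initiator side:
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'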
00:07:39.444 Initializing NVMe Controllers 00:07:39.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:39.444 Initialization complete. Launching workers. 00:07:39.444 ======================================================== 00:07:39.444 Latency(us) 00:07:39.444 Device Information : IOPS MiB/s Average min max 00:07:39.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14664.36 57.28 4363.75 967.75 16295.56 00:07:39.444 ======================================================== 00:07:39.444 Total : 14664.36 57.28 4363.75 967.75 16295.56 00:07:39.444 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.705 rmmod nvme_tcp 00:07:39.705 rmmod nvme_fabrics 00:07:39.705 rmmod nvme_keyring 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3418774 ']' 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3418774 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3418774 ']' 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3418774 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3418774 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3418774' 00:07:39.705 killing process with pid 3418774 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3418774 00:07:39.705 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3418774 00:07:39.964 nvmf threads initialize successfully 00:07:39.964 bdev subsystem init successfully 00:07:39.964 created a nvmf target service 00:07:39.964 create targets's poll groups done 00:07:39.964 all subsystems of target started 00:07:39.964 nvmf target is running 00:07:39.964 all subsystems of target stopped 00:07:39.964 destroy targets's poll groups done 00:07:39.964 destroyed the nvmf target service 00:07:39.964 bdev subsystem finish successfully 00:07:39.964 nvmf threads destroy successfully 00:07:39.964 18:40:51 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.964 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.964 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.964 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.964 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.964 18:40:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.964 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.964 18:40:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.868 18:40:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.868 18:40:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:41.868 18:40:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.868 18:40:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.126 00:07:42.126 real 0m15.848s 00:07:42.126 user 0m45.196s 00:07:42.126 sys 0m3.274s 00:07:42.126 18:40:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.126 18:40:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.126 ************************************ 00:07:42.126 END TEST nvmf_example 00:07:42.126 ************************************ 00:07:42.126 18:40:53 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:42.126 18:40:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:42.126 18:40:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.126 18:40:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.126 ************************************ 00:07:42.126 START TEST nvmf_filesystem 00:07:42.126 ************************************ 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:42.126 * Looking for test storage... 
00:07:42.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:42.126 18:40:53 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:42.126 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:42.127 18:40:53 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:42.127 #define SPDK_CONFIG_H 00:07:42.127 #define SPDK_CONFIG_APPS 1 00:07:42.127 #define SPDK_CONFIG_ARCH native 00:07:42.127 #undef SPDK_CONFIG_ASAN 00:07:42.127 #undef SPDK_CONFIG_AVAHI 00:07:42.127 #undef SPDK_CONFIG_CET 00:07:42.127 #define SPDK_CONFIG_COVERAGE 1 00:07:42.127 #define SPDK_CONFIG_CROSS_PREFIX 00:07:42.127 #undef SPDK_CONFIG_CRYPTO 00:07:42.127 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:42.127 #undef SPDK_CONFIG_CUSTOMOCF 00:07:42.127 #undef SPDK_CONFIG_DAOS 00:07:42.127 #define SPDK_CONFIG_DAOS_DIR 00:07:42.127 #define SPDK_CONFIG_DEBUG 1 00:07:42.127 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:42.127 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:42.127 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:42.127 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:42.127 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:42.127 #undef SPDK_CONFIG_DPDK_UADK 00:07:42.127 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:42.127 #define SPDK_CONFIG_EXAMPLES 1 00:07:42.127 #undef SPDK_CONFIG_FC 00:07:42.127 #define SPDK_CONFIG_FC_PATH 00:07:42.127 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:42.127 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:42.127 #undef SPDK_CONFIG_FUSE 00:07:42.127 #undef SPDK_CONFIG_FUZZER 00:07:42.127 #define SPDK_CONFIG_FUZZER_LIB 00:07:42.127 #undef SPDK_CONFIG_GOLANG 00:07:42.127 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:42.127 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:42.127 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:42.127 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:42.127 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:42.127 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:42.127 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:42.127 #define SPDK_CONFIG_IDXD 1 00:07:42.127 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:42.127 #undef SPDK_CONFIG_IPSEC_MB 00:07:42.127 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:42.127 #define SPDK_CONFIG_ISAL 1 00:07:42.127 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:42.127 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:42.127 #define SPDK_CONFIG_LIBDIR 00:07:42.127 #undef SPDK_CONFIG_LTO 00:07:42.127 #define SPDK_CONFIG_MAX_LCORES 
00:07:42.127 #define SPDK_CONFIG_NVME_CUSE 1 00:07:42.127 #undef SPDK_CONFIG_OCF 00:07:42.127 #define SPDK_CONFIG_OCF_PATH 00:07:42.127 #define SPDK_CONFIG_OPENSSL_PATH 00:07:42.127 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:42.127 #define SPDK_CONFIG_PGO_DIR 00:07:42.127 #undef SPDK_CONFIG_PGO_USE 00:07:42.127 #define SPDK_CONFIG_PREFIX /usr/local 00:07:42.127 #undef SPDK_CONFIG_RAID5F 00:07:42.127 #undef SPDK_CONFIG_RBD 00:07:42.127 #define SPDK_CONFIG_RDMA 1 00:07:42.127 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:42.127 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:42.127 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:42.127 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:42.127 #define SPDK_CONFIG_SHARED 1 00:07:42.127 #undef SPDK_CONFIG_SMA 00:07:42.127 #define SPDK_CONFIG_TESTS 1 00:07:42.127 #undef SPDK_CONFIG_TSAN 00:07:42.127 #define SPDK_CONFIG_UBLK 1 00:07:42.127 #define SPDK_CONFIG_UBSAN 1 00:07:42.127 #undef SPDK_CONFIG_UNIT_TESTS 00:07:42.127 #undef SPDK_CONFIG_URING 00:07:42.127 #define SPDK_CONFIG_URING_PATH 00:07:42.127 #undef SPDK_CONFIG_URING_ZNS 00:07:42.127 #undef SPDK_CONFIG_USDT 00:07:42.127 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:42.127 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:42.127 #define SPDK_CONFIG_VFIO_USER 1 00:07:42.127 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:42.127 #define SPDK_CONFIG_VHOST 1 00:07:42.127 #define SPDK_CONFIG_VIRTIO 1 00:07:42.127 #undef SPDK_CONFIG_VTUNE 00:07:42.127 #define SPDK_CONFIG_VTUNE_DIR 00:07:42.127 #define SPDK_CONFIG_WERROR 1 00:07:42.127 #define SPDK_CONFIG_WPDK_DIR 00:07:42.127 #undef SPDK_CONFIG_XNVME 00:07:42.127 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:42.127 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:42.128 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3420487 ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3420487 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.TpzZ2R 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.TpzZ2R/tests/target /tmp/spdk.TpzZ2R 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53496389632 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994713088 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8498323456 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993981440 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:42.129 18:40:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996885504 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=471040 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:42.129 * Looking for test storage... 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53496389632 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10712915968 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:42.129 
18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:42.129 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.130 18:40:53 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.130 18:40:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.094 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
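For orientation: the nvmf_tcp_init sequence traced further below turns the two ports of the detected e810 NIC into a small test bed, moving one port into a network namespace to act as the target and leaving the other in the default namespace as the initiator. A condensed sketch reconstructed from the commands in the trace (interface names and addresses are the ones the log reports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator port, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target port, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic on port 4420

Connectivity is then verified with a single ping in each direction before the target application starts.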
00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:44.095 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:44.095 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.095 18:40:55 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:44.095 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:44.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.095 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.355 18:40:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:44.355 00:07:44.355 --- 10.0.0.2 ping statistics --- 00:07:44.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.355 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:07:44.355 00:07:44.355 --- 10.0.0.1 ping statistics --- 00:07:44.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.355 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.355 ************************************ 00:07:44.355 START TEST nvmf_filesystem_no_in_capsule 00:07:44.355 ************************************ 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:44.355 18:40:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3422114 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3422114 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3422114 ']' 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.355 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.355 [2024-07-25 18:40:56.164463] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:44.355 [2024-07-25 18:40:56.164559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.355 [2024-07-25 18:40:56.229743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.614 [2024-07-25 18:40:56.322126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.614 [2024-07-25 18:40:56.322185] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.614 [2024-07-25 18:40:56.322214] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.614 [2024-07-25 18:40:56.322225] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.614 [2024-07-25 18:40:56.322235] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
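With the target application started, the trace that follows configures it over the RPC socket and connects the kernel initiator. A condensed sketch of those steps, assuming the rpc_cmd helper resolves to scripts/rpc.py against /var/tmp/spdk.sock inside the target namespace (all values below are taken from the log):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # TCP transport, in-capsule data size 0 (the "no_in_capsule" variant)
rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB malloc bdev with 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

The initiator then waits for a block device whose serial matches SPDKISFASTANDAWESOME (nvme0n1 in this run) and looks up its size via sysfs before exercising the filesystem test.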
00:07:44.614 [2024-07-25 18:40:56.322294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.614 [2024-07-25 18:40:56.322332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.614 [2024-07-25 18:40:56.322416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.614 [2024-07-25 18:40:56.322419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.614 [2024-07-25 18:40:56.477750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.614 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.871 Malloc1 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.871 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.872 [2024-07-25 18:40:56.665044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:44.872 { 00:07:44.872 "name": "Malloc1", 00:07:44.872 "aliases": [ 00:07:44.872 "9c425214-7091-4913-b780-8eb882a0d57a" 00:07:44.872 ], 00:07:44.872 "product_name": "Malloc disk", 00:07:44.872 "block_size": 512, 00:07:44.872 "num_blocks": 1048576, 00:07:44.872 "uuid": "9c425214-7091-4913-b780-8eb882a0d57a", 00:07:44.872 "assigned_rate_limits": { 00:07:44.872 "rw_ios_per_sec": 0, 00:07:44.872 "rw_mbytes_per_sec": 0, 00:07:44.872 "r_mbytes_per_sec": 0, 00:07:44.872 "w_mbytes_per_sec": 0 00:07:44.872 }, 00:07:44.872 "claimed": true, 00:07:44.872 "claim_type": "exclusive_write", 00:07:44.872 "zoned": false, 00:07:44.872 "supported_io_types": { 00:07:44.872 "read": true, 00:07:44.872 "write": true, 00:07:44.872 "unmap": true, 00:07:44.872 "write_zeroes": true, 00:07:44.872 "flush": true, 00:07:44.872 "reset": true, 00:07:44.872 "compare": false, 00:07:44.872 "compare_and_write": false, 00:07:44.872 "abort": true, 00:07:44.872 "nvme_admin": false, 00:07:44.872 "nvme_io": false 00:07:44.872 }, 00:07:44.872 "memory_domains": [ 00:07:44.872 { 00:07:44.872 "dma_device_id": "system", 00:07:44.872 "dma_device_type": 1 00:07:44.872 }, 00:07:44.872 { 00:07:44.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.872 "dma_device_type": 2 00:07:44.872 } 00:07:44.872 ], 00:07:44.872 "driver_specific": {} 00:07:44.872 } 00:07:44.872 ]' 00:07:44.872 
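Stripped of the xtrace noise, the rpc_cmd calls above amount to the following bring-up sequence. This is a sketch using SPDK's scripts/rpc.py wrapper (assumed to run from the SPDK repo root); the harness issues the same RPC methods and arguments through its rpc_cmd helper.

rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0            # TCP transport, in-capsule data size 0
$rpc bdev_malloc_create 512 512 -b Malloc1                    # 512 MiB ramdisk with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420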
18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:44.872 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:45.130 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:45.130 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:45.130 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:45.130 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:45.130 18:40:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.698 18:40:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.698 18:40:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:45.698 18:40:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.698 18:40:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:45.699 18:40:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.602 18:40:59 
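The jq pipeline and nvme connect interleaved above reduce to this sketch: read the malloc bdev geometry back from the target, derive the expected size in bytes, then attach the kernel NVMe/TCP initiator. Values in the comments are the ones the log reports; the until-loop is an assumed stand-in for the waitforserial helper.

bs=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
nb=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
malloc_size=$(( bs * nb ))                                                 # 536870912 bytes

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

# Wait for the namespace to appear, then resolve its block device name (nvme0n1 in this run).
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')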
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.602 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.862 18:40:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:48.429 18:41:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.368 ************************************ 00:07:49.368 START TEST filesystem_ext4 00:07:49.368 ************************************ 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:49.368 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.368 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.626 Discarding device blocks: 0/522240 done 00:07:49.626 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.626 
Filesystem UUID: 0f63f245-fd1d-4a06-907d-0d5ba3f7696e 00:07:49.626 Superblock backups stored on blocks: 00:07:49.626 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.626 00:07:49.626 Allocating group tables: 0/64 done 00:07:49.626 Writing inode tables: 0/64 done 00:07:49.885 Creating journal (8192 blocks): done 00:07:49.885 Writing superblocks and filesystem accounting information: 0/64 done 00:07:49.885 00:07:49.885 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:49.885 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3422114 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.145 00:07:50.145 real 0m0.706s 00:07:50.145 user 0m0.018s 00:07:50.145 sys 0m0.057s 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:50.145 ************************************ 00:07:50.145 END TEST filesystem_ext4 00:07:50.145 ************************************ 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.145 ************************************ 00:07:50.145 START TEST filesystem_btrfs 00:07:50.145 ************************************ 00:07:50.145 18:41:01 
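Each filesystem_* subtest runs the same cycle against the exported namespace; only the mkfs command changes (ext4 here, btrfs and xfs below). A sketch of the ext4 pass, folding in the one-time partitioning from just above:

# One-time prep: single GPT partition covering the 512 MiB namespace.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkdir -p /mnt/device

# Per-filesystem cycle: format, mount, do a small write/delete, unmount,
# then confirm the target process is still alive.
mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"    # exit status 0 means nvmf_tgt survived the I/O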
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:50.145 18:41:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:50.405 btrfs-progs v6.6.2 00:07:50.405 See https://btrfs.readthedocs.io for more information. 00:07:50.405 00:07:50.405 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:50.405 NOTE: several default settings have changed in version 5.15, please make sure 00:07:50.405 this does not affect your deployments: 00:07:50.405 - DUP for metadata (-m dup) 00:07:50.405 - enabled no-holes (-O no-holes) 00:07:50.405 - enabled free-space-tree (-R free-space-tree) 00:07:50.405 00:07:50.405 Label: (null) 00:07:50.405 UUID: 8ef0e7af-f64a-44b7-9949-1f422c5b6e04 00:07:50.405 Node size: 16384 00:07:50.405 Sector size: 4096 00:07:50.405 Filesystem size: 510.00MiB 00:07:50.405 Block group profiles: 00:07:50.405 Data: single 8.00MiB 00:07:50.405 Metadata: DUP 32.00MiB 00:07:50.405 System: DUP 8.00MiB 00:07:50.405 SSD detected: yes 00:07:50.405 Zoned device: no 00:07:50.405 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:50.405 Runtime features: free-space-tree 00:07:50.405 Checksum: crc32c 00:07:50.405 Number of devices: 1 00:07:50.405 Devices: 00:07:50.405 ID SIZE PATH 00:07:50.405 1 510.00MiB /dev/nvme0n1p1 00:07:50.405 00:07:50.405 18:41:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:50.405 18:41:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.345 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3422114 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.346 00:07:51.346 real 0m1.142s 00:07:51.346 user 0m0.018s 00:07:51.346 sys 0m0.112s 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:51.346 ************************************ 00:07:51.346 END TEST filesystem_btrfs 00:07:51.346 ************************************ 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:51.346 18:41:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.346 ************************************ 00:07:51.346 START TEST filesystem_xfs 00:07:51.346 ************************************ 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:51.346 18:41:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:51.606 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:51.606 = sectsz=512 attr=2, projid32bit=1 00:07:51.606 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:51.606 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:51.606 data = bsize=4096 blocks=130560, imaxpct=25 00:07:51.606 = sunit=0 swidth=0 blks 00:07:51.606 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:51.606 log =internal log bsize=4096 blocks=16384, version=2 00:07:51.606 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:51.606 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:52.544 Discarding blocks...Done. 
00:07:52.544 18:41:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:52.544 18:41:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3422114 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.079 00:07:55.079 real 0m3.728s 00:07:55.079 user 0m0.019s 00:07:55.079 sys 0m0.059s 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:55.079 ************************************ 00:07:55.079 END TEST filesystem_xfs 00:07:55.079 ************************************ 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:55.079 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.338 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.338 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:55.338 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:55.338 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.338 18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:55.338 
18:41:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3422114 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3422114 ']' 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3422114 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3422114 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3422114' 00:07:55.338 killing process with pid 3422114 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3422114 00:07:55.338 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3422114 00:07:55.597 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:55.597 00:07:55.597 real 0m11.343s 00:07:55.597 user 0m43.588s 00:07:55.597 sys 0m1.713s 00:07:55.597 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.597 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.597 ************************************ 00:07:55.597 END TEST nvmf_filesystem_no_in_capsule 00:07:55.597 ************************************ 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.858 
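The teardown recorded at the end of this first test (and repeated after the in-capsule run that starts next) reduces to the following steps; device and subsystem names are the ones from the log.

# Drop the test partition, detach the initiator, then tear down the target side.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"    # the harness then waits for the pid to exit before reporting timing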
************************************ 00:07:55.858 START TEST nvmf_filesystem_in_capsule 00:07:55.858 ************************************ 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3423667 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3423667 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3423667 ']' 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:55.858 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.858 [2024-07-25 18:41:07.561330] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:55.858 [2024-07-25 18:41:07.561426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.858 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.858 [2024-07-25 18:41:07.625446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.858 [2024-07-25 18:41:07.715315] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.858 [2024-07-25 18:41:07.715389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.858 [2024-07-25 18:41:07.715403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.858 [2024-07-25 18:41:07.715414] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.858 [2024-07-25 18:41:07.715424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:55.858 [2024-07-25 18:41:07.715492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.858 [2024-07-25 18:41:07.715553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.858 [2024-07-25 18:41:07.715619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.858 [2024-07-25 18:41:07.715622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.117 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.117 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:56.117 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.118 [2024-07-25 18:41:07.869845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.118 18:41:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.378 Malloc1 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.378 18:41:08 
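The in-capsule half repeats the identical bring-up; the only functional difference is the transport's in-capsule data size in the nvmf_create_transport call above. With -c 4096 the target accepts up to 4 KiB of data inside the command capsule, so small host writes can travel with the command instead of requiring a separate data transfer. Sketch of the changed call only:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # was -c 0 in the first half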
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.378 [2024-07-25 18:41:08.053381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:56.378 { 00:07:56.378 "name": "Malloc1", 00:07:56.378 "aliases": [ 00:07:56.378 "367954db-b5b8-4bd9-a7ff-ea16159000ac" 00:07:56.378 ], 00:07:56.378 "product_name": "Malloc disk", 00:07:56.378 "block_size": 512, 00:07:56.378 "num_blocks": 1048576, 00:07:56.378 "uuid": "367954db-b5b8-4bd9-a7ff-ea16159000ac", 00:07:56.378 "assigned_rate_limits": { 00:07:56.378 "rw_ios_per_sec": 0, 00:07:56.378 "rw_mbytes_per_sec": 0, 00:07:56.378 "r_mbytes_per_sec": 0, 00:07:56.378 "w_mbytes_per_sec": 0 00:07:56.378 }, 00:07:56.378 "claimed": true, 00:07:56.378 "claim_type": "exclusive_write", 00:07:56.378 "zoned": false, 00:07:56.378 "supported_io_types": { 00:07:56.378 "read": true, 00:07:56.378 "write": true, 00:07:56.378 "unmap": true, 00:07:56.378 "write_zeroes": true, 00:07:56.378 "flush": true, 00:07:56.378 "reset": true, 00:07:56.378 "compare": false, 00:07:56.378 "compare_and_write": false, 00:07:56.378 "abort": true, 00:07:56.378 "nvme_admin": false, 00:07:56.378 "nvme_io": false 00:07:56.378 }, 00:07:56.378 "memory_domains": [ 00:07:56.378 { 00:07:56.378 "dma_device_id": "system", 00:07:56.378 "dma_device_type": 1 00:07:56.378 }, 00:07:56.378 { 00:07:56.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.378 "dma_device_type": 2 00:07:56.378 } 00:07:56.378 ], 00:07:56.378 "driver_specific": {} 00:07:56.378 } 00:07:56.378 ]' 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:56.378 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.946 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.946 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:56.946 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.946 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:56.946 18:41:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:59.484 18:41:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:59.484 18:41:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:00.050 18:41:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.431 ************************************ 00:08:01.431 START TEST filesystem_in_capsule_ext4 00:08:01.431 ************************************ 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:01.431 18:41:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:01.431 mke2fs 1.46.5 (30-Dec-2021) 00:08:01.431 Discarding device blocks: 0/522240 done 00:08:01.431 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:01.431 Filesystem UUID: b1eb8a07-b662-4e80-aefb-93bc5e13071d 00:08:01.431 Superblock backups stored on blocks: 00:08:01.431 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:01.431 00:08:01.431 Allocating group tables: 0/64 done 00:08:01.431 Writing inode tables: 0/64 done 00:08:03.334 Creating journal (8192 blocks): done 00:08:03.334 Writing superblocks and filesystem accounting information: 0/64 done 00:08:03.334 00:08:03.334 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:03.334 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.903 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3423667 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.163 00:08:04.163 real 0m2.896s 00:08:04.163 user 0m0.021s 00:08:04.163 sys 0m0.057s 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:04.163 ************************************ 00:08:04.163 END TEST filesystem_in_capsule_ext4 00:08:04.163 ************************************ 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.163 ************************************ 00:08:04.163 START TEST filesystem_in_capsule_btrfs 00:08:04.163 ************************************ 00:08:04.163 18:41:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:04.163 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:04.164 18:41:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:04.423 btrfs-progs v6.6.2 00:08:04.423 See https://btrfs.readthedocs.io for more information. 00:08:04.423 00:08:04.423 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:04.423 NOTE: several default settings have changed in version 5.15, please make sure 00:08:04.423 this does not affect your deployments: 00:08:04.423 - DUP for metadata (-m dup) 00:08:04.423 - enabled no-holes (-O no-holes) 00:08:04.423 - enabled free-space-tree (-R free-space-tree) 00:08:04.423 00:08:04.423 Label: (null) 00:08:04.423 UUID: c4d32aaa-a94d-469f-a94c-7a02291a7d56 00:08:04.423 Node size: 16384 00:08:04.423 Sector size: 4096 00:08:04.423 Filesystem size: 510.00MiB 00:08:04.423 Block group profiles: 00:08:04.423 Data: single 8.00MiB 00:08:04.423 Metadata: DUP 32.00MiB 00:08:04.423 System: DUP 8.00MiB 00:08:04.423 SSD detected: yes 00:08:04.423 Zoned device: no 00:08:04.423 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:04.423 Runtime features: free-space-tree 00:08:04.423 Checksum: crc32c 00:08:04.423 Number of devices: 1 00:08:04.423 Devices: 00:08:04.423 ID SIZE PATH 00:08:04.423 1 510.00MiB /dev/nvme0n1p1 00:08:04.423 00:08:04.423 18:41:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:04.423 18:41:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.385 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.385 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3423667 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.386 00:08:05.386 real 0m1.223s 00:08:05.386 user 0m0.031s 00:08:05.386 sys 0m0.111s 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:05.386 ************************************ 00:08:05.386 END TEST filesystem_in_capsule_btrfs 00:08:05.386 ************************************ 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.386 ************************************ 00:08:05.386 START TEST filesystem_in_capsule_xfs 00:08:05.386 ************************************ 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:05.386 18:41:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:05.645 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:05.645 = sectsz=512 attr=2, projid32bit=1 00:08:05.645 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:05.645 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:05.645 data = bsize=4096 blocks=130560, imaxpct=25 00:08:05.645 = sunit=0 swidth=0 blks 00:08:05.645 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:05.645 log =internal log bsize=4096 blocks=16384, version=2 00:08:05.645 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:05.645 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.582 Discarding blocks...Done. 
00:08:06.582 18:41:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:06.582 18:41:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3423667 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.115 00:08:09.115 real 0m3.355s 00:08:09.115 user 0m0.021s 00:08:09.115 sys 0m0.057s 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.115 ************************************ 00:08:09.115 END TEST filesystem_in_capsule_xfs 00:08:09.115 ************************************ 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.115 18:41:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3423667 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3423667 ']' 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3423667 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:09.115 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3423667 00:08:09.375 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:09.375 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:09.375 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3423667' 00:08:09.375 killing process with pid 3423667 00:08:09.375 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3423667 00:08:09.375 18:41:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3423667 00:08:09.634 18:41:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:09.634 00:08:09.634 real 0m13.926s 00:08:09.634 user 0m53.645s 00:08:09.634 sys 0m1.937s 00:08:09.634 18:41:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.634 18:41:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.634 ************************************ 00:08:09.634 END TEST nvmf_filesystem_in_capsule 00:08:09.634 ************************************ 00:08:09.634 18:41:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:09.634 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.635 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:09.635 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.635 18:41:21 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:09.635 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.635 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.635 rmmod nvme_tcp 00:08:09.635 rmmod nvme_fabrics 00:08:09.635 rmmod nvme_keyring 00:08:09.635 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.894 18:41:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.799 18:41:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:11.799 00:08:11.799 real 0m29.765s 00:08:11.799 user 1m38.140s 00:08:11.799 sys 0m5.238s 00:08:11.799 18:41:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.799 18:41:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.799 ************************************ 00:08:11.799 END TEST nvmf_filesystem 00:08:11.799 ************************************ 00:08:11.799 18:41:23 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:11.799 18:41:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:11.799 18:41:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.799 18:41:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.799 ************************************ 00:08:11.799 START TEST nvmf_target_discovery 00:08:11.799 ************************************ 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:11.799 * Looking for test storage... 
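Before nvmf_target_discovery starts, the nvmf_filesystem suite above tears itself down via nvmftestfini. Condensed from the commands visible in the trace (pid 3423667 and the cvl_0_1 initiator device come from this run; _remove_spdk_ns is the wrapper the trace calls, and its body is not shown in this section):

# Teardown sketch matching the trace above.
kill 3423667                     # killprocess: stop this run's nvmf_tgt
set +e                           # module unload is best-effort
modprobe -v -r nvme-tcp          # cascades to nvme_tcp/nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
set -e
_remove_spdk_ns                  # drops the cvl_0_0_ns_spdk target namespace
ip -4 addr flush cvl_0_1         # clear the 10.0.0.1/24 initiator test address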
00:08:11.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.799 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.058 18:41:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.967 18:41:25 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:13.967 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:13.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:13.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:13.967 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:08:13.967 00:08:13.967 --- 10.0.0.2 ping statistics --- 00:08:13.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.967 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:13.967 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:13.968 00:08:13.968 --- 10.0.0.1 ping statistics --- 00:08:13.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.968 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3427413 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3427413 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3427413 ']' 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:13.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:13.968 18:41:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.968 [2024-07-25 18:41:25.773110] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:13.968 [2024-07-25 18:41:25.773194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.968 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.227 [2024-07-25 18:41:25.844784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.227 [2024-07-25 18:41:25.934674] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.227 [2024-07-25 18:41:25.934738] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.227 [2024-07-25 18:41:25.934767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.227 [2024-07-25 18:41:25.934778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.227 [2024-07-25 18:41:25.934787] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.227 [2024-07-25 18:41:25.934875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.227 [2024-07-25 18:41:25.934940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.227 [2024-07-25 18:41:25.935006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.227 [2024-07-25 18:41:25.935009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.227 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.227 [2024-07-25 18:41:26.086831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.228 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.228 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:14.228 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.228 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:14.228 18:41:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.228 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 Null1 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 [2024-07-25 18:41:26.127200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 Null2 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:14.487 18:41:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 Null3 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 Null4 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.487 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:14.746 00:08:14.746 Discovery Log Number of Records 6, Generation counter 6 00:08:14.746 =====Discovery Log Entry 0====== 00:08:14.746 trtype: tcp 00:08:14.746 adrfam: ipv4 00:08:14.746 subtype: current discovery subsystem 00:08:14.746 treq: not required 00:08:14.746 portid: 0 00:08:14.746 trsvcid: 4420 00:08:14.746 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.746 traddr: 10.0.0.2 00:08:14.746 eflags: explicit discovery connections, duplicate discovery information 00:08:14.746 sectype: none 00:08:14.746 =====Discovery Log Entry 1====== 00:08:14.746 trtype: tcp 00:08:14.746 adrfam: ipv4 00:08:14.746 subtype: nvme subsystem 00:08:14.746 treq: not required 00:08:14.746 portid: 0 00:08:14.746 trsvcid: 4420 00:08:14.746 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:14.746 traddr: 10.0.0.2 00:08:14.746 eflags: none 00:08:14.746 sectype: none 00:08:14.746 =====Discovery Log Entry 2====== 00:08:14.746 trtype: tcp 00:08:14.746 adrfam: ipv4 00:08:14.746 subtype: nvme subsystem 00:08:14.746 treq: not required 00:08:14.746 portid: 0 00:08:14.746 trsvcid: 4420 00:08:14.746 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:14.746 traddr: 10.0.0.2 00:08:14.746 eflags: none 00:08:14.746 sectype: none 00:08:14.746 =====Discovery Log Entry 3====== 00:08:14.746 trtype: tcp 00:08:14.746 adrfam: ipv4 00:08:14.746 subtype: nvme subsystem 00:08:14.746 treq: not required 00:08:14.746 portid: 0 00:08:14.746 trsvcid: 4420 00:08:14.746 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:14.746 traddr: 10.0.0.2 00:08:14.746 eflags: none 00:08:14.746 sectype: none 00:08:14.746 =====Discovery Log Entry 4====== 00:08:14.746 trtype: tcp 00:08:14.746 adrfam: ipv4 00:08:14.746 subtype: nvme subsystem 00:08:14.746 treq: not required 
00:08:14.746 portid: 0 00:08:14.746 trsvcid: 4420 00:08:14.746 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:14.746 traddr: 10.0.0.2 00:08:14.746 eflags: none 00:08:14.746 sectype: none 00:08:14.746 =====Discovery Log Entry 5====== 00:08:14.746 trtype: tcp 00:08:14.746 adrfam: ipv4 00:08:14.746 subtype: discovery subsystem referral 00:08:14.746 treq: not required 00:08:14.746 portid: 0 00:08:14.746 trsvcid: 4430 00:08:14.746 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.746 traddr: 10.0.0.2 00:08:14.746 eflags: none 00:08:14.746 sectype: none 00:08:14.746 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:14.746 Perform nvmf subsystem discovery via RPC 00:08:14.746 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:14.746 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.746 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.746 [ 00:08:14.746 { 00:08:14.746 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:14.746 "subtype": "Discovery", 00:08:14.746 "listen_addresses": [ 00:08:14.746 { 00:08:14.746 "trtype": "TCP", 00:08:14.746 "adrfam": "IPv4", 00:08:14.746 "traddr": "10.0.0.2", 00:08:14.746 "trsvcid": "4420" 00:08:14.746 } 00:08:14.746 ], 00:08:14.746 "allow_any_host": true, 00:08:14.746 "hosts": [] 00:08:14.746 }, 00:08:14.746 { 00:08:14.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.746 "subtype": "NVMe", 00:08:14.746 "listen_addresses": [ 00:08:14.746 { 00:08:14.746 "trtype": "TCP", 00:08:14.746 "adrfam": "IPv4", 00:08:14.746 "traddr": "10.0.0.2", 00:08:14.746 "trsvcid": "4420" 00:08:14.746 } 00:08:14.746 ], 00:08:14.746 "allow_any_host": true, 00:08:14.746 "hosts": [], 00:08:14.746 "serial_number": "SPDK00000000000001", 00:08:14.746 "model_number": "SPDK bdev Controller", 00:08:14.746 "max_namespaces": 32, 00:08:14.746 "min_cntlid": 1, 00:08:14.746 "max_cntlid": 65519, 00:08:14.746 "namespaces": [ 00:08:14.746 { 00:08:14.746 "nsid": 1, 00:08:14.746 "bdev_name": "Null1", 00:08:14.746 "name": "Null1", 00:08:14.746 "nguid": "8E297B269E2F4B70B4247BA1BE8D5F2C", 00:08:14.746 "uuid": "8e297b26-9e2f-4b70-b424-7ba1be8d5f2c" 00:08:14.746 } 00:08:14.746 ] 00:08:14.746 }, 00:08:14.746 { 00:08:14.746 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:14.746 "subtype": "NVMe", 00:08:14.746 "listen_addresses": [ 00:08:14.746 { 00:08:14.746 "trtype": "TCP", 00:08:14.746 "adrfam": "IPv4", 00:08:14.746 "traddr": "10.0.0.2", 00:08:14.746 "trsvcid": "4420" 00:08:14.746 } 00:08:14.746 ], 00:08:14.746 "allow_any_host": true, 00:08:14.746 "hosts": [], 00:08:14.746 "serial_number": "SPDK00000000000002", 00:08:14.746 "model_number": "SPDK bdev Controller", 00:08:14.746 "max_namespaces": 32, 00:08:14.746 "min_cntlid": 1, 00:08:14.746 "max_cntlid": 65519, 00:08:14.746 "namespaces": [ 00:08:14.746 { 00:08:14.746 "nsid": 1, 00:08:14.746 "bdev_name": "Null2", 00:08:14.746 "name": "Null2", 00:08:14.746 "nguid": "506C5662988247A0A83C27FF2089722D", 00:08:14.746 "uuid": "506c5662-9882-47a0-a83c-27ff2089722d" 00:08:14.746 } 00:08:14.746 ] 00:08:14.746 }, 00:08:14.746 { 00:08:14.746 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:14.746 "subtype": "NVMe", 00:08:14.746 "listen_addresses": [ 00:08:14.746 { 00:08:14.746 "trtype": "TCP", 00:08:14.746 "adrfam": "IPv4", 00:08:14.746 "traddr": "10.0.0.2", 00:08:14.746 "trsvcid": "4420" 00:08:14.746 } 00:08:14.746 ], 00:08:14.746 "allow_any_host": true, 
00:08:14.746 "hosts": [], 00:08:14.746 "serial_number": "SPDK00000000000003", 00:08:14.746 "model_number": "SPDK bdev Controller", 00:08:14.746 "max_namespaces": 32, 00:08:14.746 "min_cntlid": 1, 00:08:14.747 "max_cntlid": 65519, 00:08:14.747 "namespaces": [ 00:08:14.747 { 00:08:14.747 "nsid": 1, 00:08:14.747 "bdev_name": "Null3", 00:08:14.747 "name": "Null3", 00:08:14.747 "nguid": "7D17D83BE7D8480F8ECB5D0E1C3E3A74", 00:08:14.747 "uuid": "7d17d83b-e7d8-480f-8ecb-5d0e1c3e3a74" 00:08:14.747 } 00:08:14.747 ] 00:08:14.747 }, 00:08:14.747 { 00:08:14.747 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:14.747 "subtype": "NVMe", 00:08:14.747 "listen_addresses": [ 00:08:14.747 { 00:08:14.747 "trtype": "TCP", 00:08:14.747 "adrfam": "IPv4", 00:08:14.747 "traddr": "10.0.0.2", 00:08:14.747 "trsvcid": "4420" 00:08:14.747 } 00:08:14.747 ], 00:08:14.747 "allow_any_host": true, 00:08:14.747 "hosts": [], 00:08:14.747 "serial_number": "SPDK00000000000004", 00:08:14.747 "model_number": "SPDK bdev Controller", 00:08:14.747 "max_namespaces": 32, 00:08:14.747 "min_cntlid": 1, 00:08:14.747 "max_cntlid": 65519, 00:08:14.747 "namespaces": [ 00:08:14.747 { 00:08:14.747 "nsid": 1, 00:08:14.747 "bdev_name": "Null4", 00:08:14.747 "name": "Null4", 00:08:14.747 "nguid": "EBD78D2263AE4D7CA69D4D618BA53D4D", 00:08:14.747 "uuid": "ebd78d22-63ae-4d7c-a69d-4d618ba53d4d" 00:08:14.747 } 00:08:14.747 ] 00:08:14.747 } 00:08:14.747 ] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.747 rmmod nvme_tcp 00:08:14.747 rmmod nvme_fabrics 00:08:14.747 rmmod nvme_keyring 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3427413 ']' 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3427413 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3427413 ']' 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3427413 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3427413 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3427413' 00:08:14.747 killing process with pid 3427413 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3427413 00:08:14.747 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3427413 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.006 18:41:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.544 18:41:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.544 00:08:17.544 real 0m5.249s 00:08:17.544 user 0m4.219s 00:08:17.544 sys 0m1.767s 00:08:17.544 18:41:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.544 18:41:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.544 ************************************ 00:08:17.544 END TEST nvmf_target_discovery 00:08:17.544 ************************************ 00:08:17.544 18:41:28 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.544 18:41:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:17.544 18:41:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.544 18:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.544 ************************************ 00:08:17.544 START TEST nvmf_referrals 00:08:17.544 ************************************ 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.544 * Looking for test storage... 00:08:17.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.544 18:41:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
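The variables traced here come from test/nvmf/common.sh and target/referrals.sh: a throwaway host NQN and host ID generated with nvme gen-hostnqn, plus three dummy referral addresses (127.0.0.2, 127.0.0.3, 127.0.0.4) that the test will register against the discovery service. Every initiator-side check later in this run is a plain nvme discover against the 10.0.0.2:8009 discovery listener the test brings up further down, filtered with jq. A minimal sketch of that query, reusing only values visible in this log (the host NQN/UUID is simply the one generated in this particular run), looks like this:

    # Ask the discovery service on 10.0.0.2:8009 for its log page and print
    # the transport address of every record except the current discovery
    # subsystem itself (the same jq filter referrals.sh uses).
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort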
00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.545 18:41:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.450 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.451 18:41:30 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.451 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.451 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.451 18:41:30 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.451 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.451 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.451 18:41:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.451 18:41:31 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:08:19.451 00:08:19.451 --- 10.0.0.2 ping statistics --- 00:08:19.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.451 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:08:19.451 00:08:19.451 --- 10.0.0.1 ping statistics --- 00:08:19.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.451 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3429390 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3429390 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3429390 ']' 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:19.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.451 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.451 [2024-07-25 18:41:31.141463] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:19.451 [2024-07-25 18:41:31.141542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.451 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.451 [2024-07-25 18:41:31.207490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.451 [2024-07-25 18:41:31.295610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.452 [2024-07-25 18:41:31.295674] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.452 [2024-07-25 18:41:31.295702] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.452 [2024-07-25 18:41:31.295714] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.452 [2024-07-25 18:41:31.295723] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.452 [2024-07-25 18:41:31.295816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.452 [2024-07-25 18:41:31.295882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.452 [2024-07-25 18:41:31.295933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.452 [2024-07-25 18:41:31.295936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 [2024-07-25 18:41:31.448752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 [2024-07-25 18:41:31.461008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
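At this point the target is running inside the cvl_0_0_ns_spdk namespace with a TCP transport (nvmf_create_transport -t tcp -o -u 8192) and a discovery listener on 10.0.0.2:8009; everything that follows is referral bookkeeping driven through rpc_cmd. Assuming rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock (an assumption about the wiring; the RPC names and arguments are exactly the ones traced below), the same sequence could be replayed by hand roughly like this:

    # Hypothetical manual replay of the referral RPCs exercised by referrals.sh.
    # The rpc.py path and default socket are assumptions; the method names,
    # flags, and jq filters are taken verbatim from this log.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Register three referrals pointing at 127.0.0.2/3/4, port 4430.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # The discovery service should now report exactly three entries.
    "$RPC" nvmf_discovery_get_referrals | jq length                        # expect 3
    "$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Remove them again and confirm the count drops back to zero.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    "$RPC" nvmf_discovery_get_referrals | jq length                        # expect 0

The later half of the test repeats the same add/remove cycle but pins each referral to a subsystem NQN (-n discovery and -n nqn.2016-06.io.spdk:cnode1) and checks, through the discovery log page, that the referral shows up as the expected record type before being removed again.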
00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.724 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.982 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:20.241 18:41:31 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.241 18:41:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.241 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:20.241 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.241 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:20.241 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:20.241 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:20.241 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.241 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:20.500 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:20.758 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.759 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.017 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.278 18:41:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.278 rmmod nvme_tcp 00:08:21.278 rmmod nvme_fabrics 00:08:21.278 rmmod nvme_keyring 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3429390 ']' 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3429390 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3429390 ']' 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3429390 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3429390 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3429390' 00:08:21.278 killing process with pid 3429390 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3429390 00:08:21.278 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3429390 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.538 18:41:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.069 18:41:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.069 00:08:24.069 real 0m6.446s 00:08:24.069 user 0m9.294s 00:08:24.069 sys 0m2.095s 00:08:24.069 18:41:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:24.069 18:41:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.069 ************************************ 00:08:24.069 END TEST nvmf_referrals 00:08:24.069 ************************************ 00:08:24.069 18:41:35 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:24.069 18:41:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:24.069 18:41:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.069 18:41:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.069 ************************************ 00:08:24.069 START TEST nvmf_connect_disconnect 00:08:24.069 ************************************ 00:08:24.069 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:24.069 * Looking for test storage... 00:08:24.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.070 18:41:35 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.070 18:41:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.976 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:08:25.977 00:08:25.977 --- 10.0.0.2 ping statistics --- 00:08:25.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.977 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:25.977 00:08:25.977 --- 10.0.0.1 ping statistics --- 00:08:25.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.977 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3431679 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3431679 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3431679 ']' 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:25.977 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.977 [2024-07-25 18:41:37.652265] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
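Condensed, the nvmf_tcp_init sequence traced above moves the first port (cvl_0_0) into a private network namespace to act as the target and leaves the second port (cvl_0_1) in the root namespace as the initiator. The commands below are a sketch of that plumbing; the interface names and the 10.0.0.0/24 addressing are simply what this host was assigned in this run.

  # target port lives in its own namespace; initiator port stays on the host
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator gets 10.0.0.1 on the host, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 towards the initiator interface, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1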
00:08:25.977 [2024-07-25 18:41:37.652347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.977 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.977 [2024-07-25 18:41:37.721486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.977 [2024-07-25 18:41:37.814927] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.977 [2024-07-25 18:41:37.814990] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.977 [2024-07-25 18:41:37.815007] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.977 [2024-07-25 18:41:37.815021] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.977 [2024-07-25 18:41:37.815033] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.977 [2024-07-25 18:41:37.815097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.977 [2024-07-25 18:41:37.815154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.977 [2024-07-25 18:41:37.815204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.977 [2024-07-25 18:41:37.815207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 [2024-07-25 18:41:37.979033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.263 18:41:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.263 18:41:38 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.263 [2024-07-25 18:41:38.030476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:26.263 18:41:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:28.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.594 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:14.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.759 rmmod nvme_tcp 00:12:16.759 rmmod nvme_fabrics 00:12:16.759 rmmod nvme_keyring 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3431679 ']' 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3431679 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 
3431679 ']' 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3431679 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3431679 00:12:16.759 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:16.760 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:16.760 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3431679' 00:12:16.760 killing process with pid 3431679 00:12:16.760 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3431679 00:12:16.760 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3431679 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.019 18:45:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.927 18:45:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.927 00:12:18.927 real 3m55.349s 00:12:18.927 user 14m56.549s 00:12:18.927 sys 0m34.191s 00:12:18.927 18:45:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.927 18:45:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.927 ************************************ 00:12:18.927 END TEST nvmf_connect_disconnect 00:12:18.927 ************************************ 00:12:18.927 18:45:30 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:18.927 18:45:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:18.927 18:45:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.927 18:45:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.185 ************************************ 00:12:19.185 START TEST nvmf_multitarget 00:12:19.185 ************************************ 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.185 * Looking for test storage... 
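For reference, the connect/disconnect test that just finished above boils down to five provisioning RPCs followed by 100 host-side connect/disconnect cycles. The outline below is a sketch reconstructed from the trace: rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock inside the target namespace, and connect_disconnect.sh performs additional checks between the connect and the disconnect that are omitted here.

  # target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks,
  # one subsystem with one namespace and one listener on 10.0.0.2:4420
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512          # returns Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: each disconnect prints the "NQN:... disconnected 1 controller(s)" lines above
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done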
00:12:19.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
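The device discovery that nvmftestinit is about to repeat here (and already ran once for the previous test) keys off the PCI bus cache: the two Intel 0x159b functions bound to the ice driver are resolved to their kernel net devices through sysfs. A minimal equivalent, using the PCI addresses seen in this run:

  # map each supported PCI function to its kernel netdev name
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for net_dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${net_dev##*/}"
      done
  done

On this host that yields cvl_0_0 and cvl_0_1, which become NVMF_TARGET_INTERFACE and NVMF_INITIATOR_INTERFACE respectively.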
00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.185 18:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.203 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:21.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:21.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:21.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:21.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:12:21.204 00:12:21.204 --- 10.0.0.2 ping statistics --- 00:12:21.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.204 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:12:21.204 00:12:21.204 --- 10.0.0.1 ping statistics --- 00:12:21.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.204 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3463373 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3463373 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3463373 ']' 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:21.204 18:45:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.204 [2024-07-25 18:45:32.936970] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
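Once nvmf_tgt is up inside the namespace and waitforlisten sees /var/tmp/spdk.sock, the multitarget test below is driven entirely through multitarget_rpc.py (under test/nvmf/target/ in the SPDK tree): it confirms exactly one target exists, creates two more, confirms the count reaches three, deletes both, and confirms the count drops back to one. In outline, with the -n and -s arguments copied verbatim from the trace:

  multitarget_rpc.py nvmf_get_targets | jq length      # 1: only the default target
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  multitarget_rpc.py nvmf_get_targets | jq length      # 3
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  multitarget_rpc.py nvmf_get_targets | jq length      # back to 1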
00:12:21.205 [2024-07-25 18:45:32.937051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.205 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.205 [2024-07-25 18:45:33.006411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.462 [2024-07-25 18:45:33.101499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.462 [2024-07-25 18:45:33.101558] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.462 [2024-07-25 18:45:33.101574] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.462 [2024-07-25 18:45:33.101588] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.462 [2024-07-25 18:45:33.101600] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.462 [2024-07-25 18:45:33.101696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.462 [2024-07-25 18:45:33.101750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.462 [2024-07-25 18:45:33.101803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.462 [2024-07-25 18:45:33.101806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.462 18:45:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:21.462 18:45:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:21.462 18:45:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.462 18:45:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.462 18:45:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.462 18:45:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.463 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:21.463 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.463 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:21.720 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:21.720 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:21.720 "nvmf_tgt_1" 00:12:21.720 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:21.720 "nvmf_tgt_2" 00:12:21.979 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.979 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:21.979 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:21.979 
18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:21.979 true 00:12:21.979 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:22.237 true 00:12:22.237 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.237 18:45:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.237 rmmod nvme_tcp 00:12:22.237 rmmod nvme_fabrics 00:12:22.237 rmmod nvme_keyring 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3463373 ']' 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3463373 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3463373 ']' 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3463373 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:22.237 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3463373 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3463373' 00:12:22.496 killing process with pid 3463373 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3463373 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3463373 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.496 18:45:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.031 18:45:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.031 00:12:25.031 real 0m5.596s 00:12:25.031 user 0m6.373s 00:12:25.031 sys 0m1.792s 00:12:25.031 18:45:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:25.031 18:45:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.031 ************************************ 00:12:25.031 END TEST nvmf_multitarget 00:12:25.031 ************************************ 00:12:25.031 18:45:36 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.031 18:45:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:25.031 18:45:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:25.031 18:45:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.031 ************************************ 00:12:25.031 START TEST nvmf_rpc 00:12:25.031 ************************************ 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.031 * Looking for test storage... 00:12:25.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.031 18:45:36 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.031 18:45:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.032 
18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.032 18:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.936 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.937 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.937 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.937 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.937 
18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.937 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:12:26.937 00:12:26.937 --- 10.0.0.2 ping statistics --- 00:12:26.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.937 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:12:26.937 00:12:26.937 --- 10.0.0.1 ping statistics --- 00:12:26.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.937 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3465467 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3465467 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3465467 ']' 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.937 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.937 [2024-07-25 18:45:38.714911] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:12:26.937 [2024-07-25 18:45:38.715011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.937 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.937 [2024-07-25 18:45:38.784801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.196 [2024-07-25 18:45:38.874900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.196 [2024-07-25 18:45:38.874957] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.196 [2024-07-25 18:45:38.874986] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.196 [2024-07-25 18:45:38.874998] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.196 [2024-07-25 18:45:38.875007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.196 [2024-07-25 18:45:38.875092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.196 [2024-07-25 18:45:38.875131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.196 [2024-07-25 18:45:38.875158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.196 [2024-07-25 18:45:38.875161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.196 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.196 18:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:27.196 18:45:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:27.196 "tick_rate": 2700000000, 00:12:27.196 "poll_groups": [ 00:12:27.196 { 00:12:27.196 "name": "nvmf_tgt_poll_group_000", 00:12:27.196 "admin_qpairs": 0, 00:12:27.196 "io_qpairs": 0, 00:12:27.196 "current_admin_qpairs": 0, 00:12:27.196 "current_io_qpairs": 0, 00:12:27.196 "pending_bdev_io": 0, 00:12:27.196 "completed_nvme_io": 0, 00:12:27.196 "transports": [] 00:12:27.196 }, 00:12:27.196 { 00:12:27.196 "name": "nvmf_tgt_poll_group_001", 00:12:27.196 "admin_qpairs": 0, 00:12:27.196 "io_qpairs": 0, 00:12:27.196 "current_admin_qpairs": 0, 00:12:27.196 "current_io_qpairs": 0, 00:12:27.196 "pending_bdev_io": 0, 00:12:27.196 "completed_nvme_io": 0, 00:12:27.196 "transports": [] 00:12:27.196 }, 00:12:27.196 { 00:12:27.196 "name": "nvmf_tgt_poll_group_002", 00:12:27.196 "admin_qpairs": 0, 00:12:27.196 "io_qpairs": 0, 00:12:27.196 "current_admin_qpairs": 0, 00:12:27.196 "current_io_qpairs": 0, 00:12:27.196 "pending_bdev_io": 0, 00:12:27.196 "completed_nvme_io": 0, 00:12:27.196 "transports": [] 
00:12:27.196 }, 00:12:27.196 { 00:12:27.196 "name": "nvmf_tgt_poll_group_003", 00:12:27.196 "admin_qpairs": 0, 00:12:27.196 "io_qpairs": 0, 00:12:27.196 "current_admin_qpairs": 0, 00:12:27.196 "current_io_qpairs": 0, 00:12:27.196 "pending_bdev_io": 0, 00:12:27.196 "completed_nvme_io": 0, 00:12:27.196 "transports": [] 00:12:27.196 } 00:12:27.196 ] 00:12:27.196 }' 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:27.196 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.454 [2024-07-25 18:45:39.114113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:27.454 "tick_rate": 2700000000, 00:12:27.454 "poll_groups": [ 00:12:27.454 { 00:12:27.454 "name": "nvmf_tgt_poll_group_000", 00:12:27.454 "admin_qpairs": 0, 00:12:27.454 "io_qpairs": 0, 00:12:27.454 "current_admin_qpairs": 0, 00:12:27.454 "current_io_qpairs": 0, 00:12:27.454 "pending_bdev_io": 0, 00:12:27.454 "completed_nvme_io": 0, 00:12:27.454 "transports": [ 00:12:27.454 { 00:12:27.454 "trtype": "TCP" 00:12:27.454 } 00:12:27.454 ] 00:12:27.454 }, 00:12:27.454 { 00:12:27.454 "name": "nvmf_tgt_poll_group_001", 00:12:27.454 "admin_qpairs": 0, 00:12:27.454 "io_qpairs": 0, 00:12:27.454 "current_admin_qpairs": 0, 00:12:27.454 "current_io_qpairs": 0, 00:12:27.454 "pending_bdev_io": 0, 00:12:27.454 "completed_nvme_io": 0, 00:12:27.454 "transports": [ 00:12:27.454 { 00:12:27.454 "trtype": "TCP" 00:12:27.454 } 00:12:27.454 ] 00:12:27.454 }, 00:12:27.454 { 00:12:27.454 "name": "nvmf_tgt_poll_group_002", 00:12:27.454 "admin_qpairs": 0, 00:12:27.454 "io_qpairs": 0, 00:12:27.454 "current_admin_qpairs": 0, 00:12:27.454 "current_io_qpairs": 0, 00:12:27.454 "pending_bdev_io": 0, 00:12:27.454 "completed_nvme_io": 0, 00:12:27.454 "transports": [ 00:12:27.454 { 00:12:27.454 "trtype": "TCP" 00:12:27.454 } 00:12:27.454 ] 00:12:27.454 }, 00:12:27.454 { 00:12:27.454 "name": "nvmf_tgt_poll_group_003", 00:12:27.454 "admin_qpairs": 0, 00:12:27.454 "io_qpairs": 0, 00:12:27.454 "current_admin_qpairs": 0, 00:12:27.454 "current_io_qpairs": 0, 00:12:27.454 "pending_bdev_io": 0, 00:12:27.454 "completed_nvme_io": 0, 00:12:27.454 "transports": [ 00:12:27.454 { 00:12:27.454 "trtype": "TCP" 00:12:27.454 } 00:12:27.454 ] 00:12:27.454 } 00:12:27.454 ] 
00:12:27.454 }' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.454 Malloc1 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.454 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.455 [2024-07-25 18:45:39.253468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:27.455 [2024-07-25 18:45:39.275919] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:27.455 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:27.455 could not add new controller: failed to write to nvme-fabrics device 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.455 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.021 18:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.021 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:28.021 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.021 18:45:39 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:28.021 18:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:30.555 18:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.555 [2024-07-25 18:45:41.999464] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:30.555 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:30.555 could not add new controller: failed to write to nvme-fabrics device 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.555 18:45:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.123 18:45:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.123 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:31.123 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.123 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:31.123 18:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:33.030 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:33.030 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:33.030 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.030 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.031 [2024-07-25 18:45:44.818637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.031 18:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.600 18:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.600 18:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:33.600 18:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.600 18:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:33.600 18:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.134 [2024-07-25 18:45:47.584772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.134 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.135 
18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.135 18:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.394 18:45:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.394 18:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:36.394 18:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.394 18:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:36.394 18:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.923 18:45:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 [2024-07-25 18:45:50.363188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.923 18:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.489 18:45:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.489 18:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:39.489 18:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.489 18:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:39.489 18:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.392 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.393 [2024-07-25 18:45:53.173501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.393 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.964 18:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.964 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:41.964 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
00:12:41.964 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:41.964 18:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.536 [2024-07-25 18:45:55.944128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.536 18:45:55 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.536 18:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.795 18:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.795 18:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:44.795 18:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.795 18:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:44.795 18:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
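The stretch of trace above is one pass of the rpc.sh serial-number loop: create a subsystem with a known serial, expose it over TCP, attach a namespace, connect with nvme-cli, poll lsblk until the serial appears, then disconnect and tear the subsystem down. Condensed into plain shell, and assuming a running SPDK target with rpc.py on PATH, an existing Malloc1 bdev, and the same listener address as this job (10.0.0.2:4420), one iteration looks roughly like this:

  NQN=nqn.2016-06.io.spdk:cnode1
  SERIAL=SPDKISFASTANDAWESOME

  # Target side: subsystem with a fixed serial, TCP listener, one namespace.
  rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
  rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host "$NQN"

  # Initiator side: connect, then wait until a block device with our serial shows up.
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  for i in $(seq 1 15); do
    [ "$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL")" -ge 1 ] && break
    sleep 2
  done

  # Teardown mirrors the setup.
  nvme disconnect -n "$NQN"
  rpc.py nvmf_subsystem_remove_ns "$NQN" 5
  rpc.py nvmf_delete_subsystem "$NQN"

The waitforserial and waitforserial_disconnect helpers in autotest_common.sh are essentially that lsblk poll, bounded at 15 retries of 2 seconds each.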
00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 [2024-07-25 18:45:58.772368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.332 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 [2024-07-25 18:45:58.820438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 [2024-07-25 18:45:58.868597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 [2024-07-25 18:45:58.916762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 [2024-07-25 18:45:58.964930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:47.333 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.333 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.333 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.333 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:47.333 "tick_rate": 2700000000, 00:12:47.333 "poll_groups": [ 00:12:47.333 { 00:12:47.333 "name": "nvmf_tgt_poll_group_000", 00:12:47.333 "admin_qpairs": 2, 00:12:47.333 
"io_qpairs": 84, 00:12:47.333 "current_admin_qpairs": 0, 00:12:47.333 "current_io_qpairs": 0, 00:12:47.333 "pending_bdev_io": 0, 00:12:47.333 "completed_nvme_io": 183, 00:12:47.333 "transports": [ 00:12:47.333 { 00:12:47.333 "trtype": "TCP" 00:12:47.333 } 00:12:47.333 ] 00:12:47.333 }, 00:12:47.333 { 00:12:47.333 "name": "nvmf_tgt_poll_group_001", 00:12:47.333 "admin_qpairs": 2, 00:12:47.333 "io_qpairs": 84, 00:12:47.333 "current_admin_qpairs": 0, 00:12:47.333 "current_io_qpairs": 0, 00:12:47.333 "pending_bdev_io": 0, 00:12:47.333 "completed_nvme_io": 186, 00:12:47.333 "transports": [ 00:12:47.333 { 00:12:47.333 "trtype": "TCP" 00:12:47.333 } 00:12:47.333 ] 00:12:47.333 }, 00:12:47.333 { 00:12:47.333 "name": "nvmf_tgt_poll_group_002", 00:12:47.333 "admin_qpairs": 1, 00:12:47.333 "io_qpairs": 84, 00:12:47.333 "current_admin_qpairs": 0, 00:12:47.333 "current_io_qpairs": 0, 00:12:47.333 "pending_bdev_io": 0, 00:12:47.334 "completed_nvme_io": 134, 00:12:47.334 "transports": [ 00:12:47.334 { 00:12:47.334 "trtype": "TCP" 00:12:47.334 } 00:12:47.334 ] 00:12:47.334 }, 00:12:47.334 { 00:12:47.334 "name": "nvmf_tgt_poll_group_003", 00:12:47.334 "admin_qpairs": 2, 00:12:47.334 "io_qpairs": 84, 00:12:47.334 "current_admin_qpairs": 0, 00:12:47.334 "current_io_qpairs": 0, 00:12:47.334 "pending_bdev_io": 0, 00:12:47.334 "completed_nvme_io": 183, 00:12:47.334 "transports": [ 00:12:47.334 { 00:12:47.334 "trtype": "TCP" 00:12:47.334 } 00:12:47.334 ] 00:12:47.334 } 00:12:47.334 ] 00:12:47.334 }' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.334 rmmod nvme_tcp 00:12:47.334 rmmod nvme_fabrics 00:12:47.334 rmmod nvme_keyring 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:47.334 18:45:59 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3465467 ']' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3465467 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3465467 ']' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3465467 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3465467 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3465467' 00:12:47.334 killing process with pid 3465467 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3465467 00:12:47.334 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3465467 00:12:47.594 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.594 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.594 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.854 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.854 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.854 18:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.854 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.854 18:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.763 18:46:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.763 00:12:49.763 real 0m25.067s 00:12:49.763 user 1m21.439s 00:12:49.763 sys 0m4.036s 00:12:49.763 18:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.763 18:46:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.763 ************************************ 00:12:49.763 END TEST nvmf_rpc 00:12:49.763 ************************************ 00:12:49.763 18:46:01 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:49.763 18:46:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:49.763 18:46:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.763 18:46:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.763 ************************************ 00:12:49.763 START TEST nvmf_invalid 00:12:49.763 ************************************ 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:49.763 * Looking for test storage... 
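Just before the teardown above, rpc.sh sanity-checks the whole run with nvmf_get_stats: the jsum helper pipes the stats through jq and awk to sum a counter across all poll groups, and the test only passes if both admin and I/O queue pairs were actually exercised. A minimal stand-alone equivalent of that check, assuming the same running target and rpc.py on PATH:

  # Sum one per-poll-group counter from nvmf_get_stats (mirrors the jsum helper).
  jsum() {
    rpc.py nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'
  }

  admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')  # 7 in this run
  io_qpairs=$(jsum '.poll_groups[].io_qpairs')        # 336 in this run
  (( admin_qpairs > 0 && io_qpairs > 0 )) || echo 'no queue pairs were exercised' >&2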
00:12:49.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.763 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.764 18:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.022 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:50.022 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:50.022 18:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:50.022 18:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.923 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.923 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.923 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.923 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:12:51.924 00:12:51.924 --- 10.0.0.2 ping statistics --- 00:12:51.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.924 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:51.924 00:12:51.924 --- 10.0.0.1 ping statistics --- 00:12:51.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.924 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.924 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3469954 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3469954 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3469954 ']' 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.181 18:46:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.181 [2024-07-25 18:46:03.862685] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
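The nvmf_tcp_init sequence traced above splits the two E810 ports between target and initiator: cvl_0_0 is moved into a private network namespace and given 10.0.0.2, cvl_0_1 stays on the host as 10.0.0.1, TCP port 4420 is opened in the firewall, and both directions are ping-checked before the target application is started inside the namespace. Condensed (the cvl_* interface names are specific to this machine and would differ elsewhere):

  # Target port lives in its own netns; the initiator port stays on the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Launching nvmf_tgt under the NVMF_TARGET_NS_CMD prefix ("ip netns exec cvl_0_0_ns_spdk", as in the nvmfappstart line above) is what makes the initiator-side nvme connect traffic cross the physical link rather than loopback.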
00:12:52.181 [2024-07-25 18:46:03.862779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.181 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.181 [2024-07-25 18:46:03.941283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.181 [2024-07-25 18:46:04.040640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.181 [2024-07-25 18:46:04.040690] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.181 [2024-07-25 18:46:04.040718] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.181 [2024-07-25 18:46:04.040730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.181 [2024-07-25 18:46:04.040740] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.181 [2024-07-25 18:46:04.040830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.181 [2024-07-25 18:46:04.040870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.181 [2024-07-25 18:46:04.040953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.181 [2024-07-25 18:46:04.040955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.438 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16548 00:12:52.695 [2024-07-25 18:46:04.396206] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:52.695 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:52.695 { 00:12:52.695 "nqn": "nqn.2016-06.io.spdk:cnode16548", 00:12:52.695 "tgt_name": "foobar", 00:12:52.695 "method": "nvmf_create_subsystem", 00:12:52.695 "req_id": 1 00:12:52.695 } 00:12:52.695 Got JSON-RPC error response 00:12:52.695 response: 00:12:52.695 { 00:12:52.695 "code": -32603, 00:12:52.695 "message": "Unable to find target foobar" 00:12:52.695 }' 00:12:52.695 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:52.695 { 00:12:52.695 "nqn": "nqn.2016-06.io.spdk:cnode16548", 00:12:52.695 "tgt_name": "foobar", 00:12:52.695 "method": "nvmf_create_subsystem", 00:12:52.695 "req_id": 1 00:12:52.695 } 00:12:52.695 Got JSON-RPC error response 00:12:52.695 response: 00:12:52.695 { 00:12:52.695 "code": -32603, 00:12:52.695 "message": "Unable to find target foobar" 00:12:52.695 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:52.695 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:52.695 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20637 00:12:52.952 [2024-07-25 18:46:04.649113] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20637: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:52.952 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:52.952 { 00:12:52.952 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:12:52.952 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:52.952 "method": "nvmf_create_subsystem", 00:12:52.952 "req_id": 1 00:12:52.952 } 00:12:52.952 Got JSON-RPC error response 00:12:52.952 response: 00:12:52.952 { 00:12:52.952 "code": -32602, 00:12:52.952 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:52.952 }' 00:12:52.952 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:52.952 { 00:12:52.952 "nqn": "nqn.2016-06.io.spdk:cnode20637", 00:12:52.952 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:52.952 "method": "nvmf_create_subsystem", 00:12:52.952 "req_id": 1 00:12:52.952 } 00:12:52.952 Got JSON-RPC error response 00:12:52.952 response: 00:12:52.952 { 00:12:52.952 "code": -32602, 00:12:52.952 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:52.952 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:52.952 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:52.952 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20302 00:12:53.210 [2024-07-25 18:46:04.905870] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20302: invalid model number 'SPDK_Controller' 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:53.210 { 00:12:53.210 "nqn": "nqn.2016-06.io.spdk:cnode20302", 00:12:53.210 "model_number": "SPDK_Controller\u001f", 00:12:53.210 "method": "nvmf_create_subsystem", 00:12:53.210 "req_id": 1 00:12:53.210 } 00:12:53.210 Got JSON-RPC error response 00:12:53.210 response: 00:12:53.210 { 00:12:53.210 "code": -32602, 00:12:53.210 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.210 }' 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:53.210 { 00:12:53.210 "nqn": "nqn.2016-06.io.spdk:cnode20302", 00:12:53.210 "model_number": "SPDK_Controller\u001f", 00:12:53.210 "method": "nvmf_create_subsystem", 00:12:53.210 "req_id": 1 00:12:53.210 } 00:12:53.210 Got JSON-RPC error response 00:12:53.210 response: 00:12:53.210 { 00:12:53.210 "code": -32602, 00:12:53.210 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.210 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.210 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'RBnrQD_Usd}kF$DGbPa.' 00:12:53.211 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'RBnrQD_Usd}kF$DGbPa.' nqn.2016-06.io.spdk:cnode17329 00:12:53.470 [2024-07-25 18:46:05.226992] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17329: invalid serial number 'RBnrQD_Usd}kF$DGbPa.' 
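The trace above is target/invalid.sh assembling a random serial number one byte at a time (printf %x picks a code point, echo -e appends the character) and then handing it to rpc.py nvmf_create_subsystem -s, which rejects it with "invalid serial number". A minimal sketch of that generation loop, assuming bash and the 32-127 code range shown in the chars array (illustrative only, not the exact invalid.sh source):

    gen_random_s() {
        # length of the string to build (the model-number case below uses 41)
        local length=$1 ll code string=
        for (( ll = 0; ll < length; ll++ )); do
            # pick a code point in 32..127, mirroring the chars array in the trace
            code=$(( RANDOM % 96 + 32 ))
            # printf %x gives the hex form, echo -e turns \xNN into the character
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        echo "$string"
    }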
00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:53.470 { 00:12:53.470 "nqn": "nqn.2016-06.io.spdk:cnode17329", 00:12:53.470 "serial_number": "RBnrQD_U\u007fsd}kF$DGbPa.", 00:12:53.470 "method": "nvmf_create_subsystem", 00:12:53.470 "req_id": 1 00:12:53.470 } 00:12:53.470 Got JSON-RPC error response 00:12:53.470 response: 00:12:53.470 { 00:12:53.470 "code": -32602, 00:12:53.470 "message": "Invalid SN RBnrQD_U\u007fsd}kF$DGbPa." 00:12:53.470 }' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:53.470 { 00:12:53.470 "nqn": "nqn.2016-06.io.spdk:cnode17329", 00:12:53.470 "serial_number": "RBnrQD_U\u007fsd}kF$DGbPa.", 00:12:53.470 "method": "nvmf_create_subsystem", 00:12:53.470 "req_id": 1 00:12:53.470 } 00:12:53.470 Got JSON-RPC error response 00:12:53.470 response: 00:12:53.470 { 00:12:53.470 "code": -32602, 00:12:53.470 "message": "Invalid SN RBnrQD_U\u007fsd}kF$DGbPa." 00:12:53.470 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:53.470 18:46:05 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:53.470 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:53.471 18:46:05 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:53.471 18:46:05 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:53.471 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 
18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '(xPEW )c7q -s#]kZdhsF"' 00:12:53.729 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '(xPEW )c7q -s#]kZdhsF"' nqn.2016-06.io.spdk:cnode11201 00:12:53.987 [2024-07-25 18:46:05.628320] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11201: invalid model number '(xPEW )c7q -s#]kZdhsF"' 00:12:53.987 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:53.987 { 00:12:53.987 "nqn": "nqn.2016-06.io.spdk:cnode11201", 00:12:53.987 "model_number": "(xPEW )c7q -s#]kZdhsF\"", 00:12:53.987 "method": "nvmf_create_subsystem", 00:12:53.987 "req_id": 1 00:12:53.987 } 00:12:53.987 Got JSON-RPC error response 00:12:53.987 response: 00:12:53.987 { 00:12:53.987 "code": -32602, 00:12:53.987 "message": "Invalid MN (xPEW )c7q -s#]kZdhsF\"" 00:12:53.987 }' 00:12:53.987 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:53.987 { 00:12:53.987 "nqn": "nqn.2016-06.io.spdk:cnode11201", 00:12:53.987 "model_number": "(xPEW )c7q -s#]kZdhsF\"", 00:12:53.987 "method": "nvmf_create_subsystem", 00:12:53.987 "req_id": 1 00:12:53.987 } 00:12:53.987 Got JSON-RPC 
error response 00:12:53.987 response: 00:12:53.987 { 00:12:53.987 "code": -32602, 00:12:53.987 "message": "Invalid MN (xPEW )c7q -s#]kZdhsF\"" 00:12:53.987 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.987 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:54.244 [2024-07-25 18:46:05.889257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.244 18:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:54.501 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:54.501 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:54.501 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:54.501 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:54.501 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:54.759 [2024-07-25 18:46:06.382940] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:54.759 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:54.759 { 00:12:54.759 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:54.759 "listen_address": { 00:12:54.759 "trtype": "tcp", 00:12:54.759 "traddr": "", 00:12:54.759 "trsvcid": "4421" 00:12:54.759 }, 00:12:54.759 "method": "nvmf_subsystem_remove_listener", 00:12:54.759 "req_id": 1 00:12:54.760 } 00:12:54.760 Got JSON-RPC error response 00:12:54.760 response: 00:12:54.760 { 00:12:54.760 "code": -32602, 00:12:54.760 "message": "Invalid parameters" 00:12:54.760 }' 00:12:54.760 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:54.760 { 00:12:54.760 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:54.760 "listen_address": { 00:12:54.760 "trtype": "tcp", 00:12:54.760 "traddr": "", 00:12:54.760 "trsvcid": "4421" 00:12:54.760 }, 00:12:54.760 "method": "nvmf_subsystem_remove_listener", 00:12:54.760 "req_id": 1 00:12:54.760 } 00:12:54.760 Got JSON-RPC error response 00:12:54.760 response: 00:12:54.760 { 00:12:54.760 "code": -32602, 00:12:54.760 "message": "Invalid parameters" 00:12:54.760 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:54.760 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4243 -i 0 00:12:54.760 [2024-07-25 18:46:06.627726] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4243: invalid cntlid range [0-65519] 00:12:55.017 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:55.017 { 00:12:55.017 "nqn": "nqn.2016-06.io.spdk:cnode4243", 00:12:55.017 "min_cntlid": 0, 00:12:55.017 "method": "nvmf_create_subsystem", 00:12:55.017 "req_id": 1 00:12:55.017 } 00:12:55.017 Got JSON-RPC error response 00:12:55.017 response: 00:12:55.017 { 00:12:55.017 "code": -32602, 00:12:55.017 "message": "Invalid cntlid range [0-65519]" 00:12:55.017 }' 00:12:55.017 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:55.017 { 00:12:55.017 "nqn": "nqn.2016-06.io.spdk:cnode4243", 00:12:55.017 "min_cntlid": 0, 00:12:55.017 "method": "nvmf_create_subsystem", 
00:12:55.017 "req_id": 1 00:12:55.017 } 00:12:55.017 Got JSON-RPC error response 00:12:55.017 response: 00:12:55.017 { 00:12:55.017 "code": -32602, 00:12:55.017 "message": "Invalid cntlid range [0-65519]" 00:12:55.017 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.017 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2262 -i 65520 00:12:55.017 [2024-07-25 18:46:06.884577] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2262: invalid cntlid range [65520-65519] 00:12:55.276 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:55.276 { 00:12:55.276 "nqn": "nqn.2016-06.io.spdk:cnode2262", 00:12:55.276 "min_cntlid": 65520, 00:12:55.276 "method": "nvmf_create_subsystem", 00:12:55.276 "req_id": 1 00:12:55.276 } 00:12:55.276 Got JSON-RPC error response 00:12:55.276 response: 00:12:55.276 { 00:12:55.276 "code": -32602, 00:12:55.276 "message": "Invalid cntlid range [65520-65519]" 00:12:55.276 }' 00:12:55.276 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:55.276 { 00:12:55.276 "nqn": "nqn.2016-06.io.spdk:cnode2262", 00:12:55.276 "min_cntlid": 65520, 00:12:55.276 "method": "nvmf_create_subsystem", 00:12:55.276 "req_id": 1 00:12:55.276 } 00:12:55.276 Got JSON-RPC error response 00:12:55.276 response: 00:12:55.276 { 00:12:55.276 "code": -32602, 00:12:55.276 "message": "Invalid cntlid range [65520-65519]" 00:12:55.276 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.276 18:46:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8179 -I 0 00:12:55.276 [2024-07-25 18:46:07.137458] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8179: invalid cntlid range [1-0] 00:12:55.533 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:55.533 { 00:12:55.533 "nqn": "nqn.2016-06.io.spdk:cnode8179", 00:12:55.533 "max_cntlid": 0, 00:12:55.533 "method": "nvmf_create_subsystem", 00:12:55.533 "req_id": 1 00:12:55.533 } 00:12:55.533 Got JSON-RPC error response 00:12:55.533 response: 00:12:55.533 { 00:12:55.533 "code": -32602, 00:12:55.533 "message": "Invalid cntlid range [1-0]" 00:12:55.533 }' 00:12:55.533 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:55.533 { 00:12:55.533 "nqn": "nqn.2016-06.io.spdk:cnode8179", 00:12:55.533 "max_cntlid": 0, 00:12:55.533 "method": "nvmf_create_subsystem", 00:12:55.533 "req_id": 1 00:12:55.533 } 00:12:55.533 Got JSON-RPC error response 00:12:55.533 response: 00:12:55.533 { 00:12:55.533 "code": -32602, 00:12:55.533 "message": "Invalid cntlid range [1-0]" 00:12:55.533 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.533 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4405 -I 65520 00:12:55.533 [2024-07-25 18:46:07.382202] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4405: invalid cntlid range [1-65520] 00:12:55.533 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:55.533 { 00:12:55.533 "nqn": "nqn.2016-06.io.spdk:cnode4405", 00:12:55.533 "max_cntlid": 65520, 00:12:55.533 "method": "nvmf_create_subsystem", 00:12:55.533 "req_id": 1 00:12:55.533 } 
00:12:55.533 Got JSON-RPC error response 00:12:55.533 response: 00:12:55.533 { 00:12:55.533 "code": -32602, 00:12:55.533 "message": "Invalid cntlid range [1-65520]" 00:12:55.533 }' 00:12:55.533 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:55.533 { 00:12:55.533 "nqn": "nqn.2016-06.io.spdk:cnode4405", 00:12:55.533 "max_cntlid": 65520, 00:12:55.533 "method": "nvmf_create_subsystem", 00:12:55.533 "req_id": 1 00:12:55.533 } 00:12:55.533 Got JSON-RPC error response 00:12:55.533 response: 00:12:55.533 { 00:12:55.533 "code": -32602, 00:12:55.533 "message": "Invalid cntlid range [1-65520]" 00:12:55.533 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.533 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20792 -i 6 -I 5 00:12:55.790 [2024-07-25 18:46:07.623002] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20792: invalid cntlid range [6-5] 00:12:55.790 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:55.790 { 00:12:55.790 "nqn": "nqn.2016-06.io.spdk:cnode20792", 00:12:55.790 "min_cntlid": 6, 00:12:55.790 "max_cntlid": 5, 00:12:55.790 "method": "nvmf_create_subsystem", 00:12:55.790 "req_id": 1 00:12:55.790 } 00:12:55.790 Got JSON-RPC error response 00:12:55.790 response: 00:12:55.790 { 00:12:55.790 "code": -32602, 00:12:55.790 "message": "Invalid cntlid range [6-5]" 00:12:55.790 }' 00:12:55.790 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:55.791 { 00:12:55.791 "nqn": "nqn.2016-06.io.spdk:cnode20792", 00:12:55.791 "min_cntlid": 6, 00:12:55.791 "max_cntlid": 5, 00:12:55.791 "method": "nvmf_create_subsystem", 00:12:55.791 "req_id": 1 00:12:55.791 } 00:12:55.791 Got JSON-RPC error response 00:12:55.791 response: 00:12:55.791 { 00:12:55.791 "code": -32602, 00:12:55.791 "message": "Invalid cntlid range [6-5]" 00:12:55.791 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.791 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:56.050 { 00:12:56.050 "name": "foobar", 00:12:56.050 "method": "nvmf_delete_target", 00:12:56.050 "req_id": 1 00:12:56.050 } 00:12:56.050 Got JSON-RPC error response 00:12:56.050 response: 00:12:56.050 { 00:12:56.050 "code": -32602, 00:12:56.050 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:56.050 }' 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:56.050 { 00:12:56.050 "name": "foobar", 00:12:56.050 "method": "nvmf_delete_target", 00:12:56.050 "req_id": 1 00:12:56.050 } 00:12:56.050 Got JSON-RPC error response 00:12:56.050 response: 00:12:56.050 { 00:12:56.050 "code": -32602, 00:12:56.050 "message": "The specified target doesn't exist, cannot delete it." 
00:12:56.050 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.050 rmmod nvme_tcp 00:12:56.050 rmmod nvme_fabrics 00:12:56.050 rmmod nvme_keyring 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3469954 ']' 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3469954 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3469954 ']' 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3469954 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3469954 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3469954' 00:12:56.050 killing process with pid 3469954 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3469954 00:12:56.050 18:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3469954 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.309 18:46:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.840 18:46:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.840 00:12:58.840 real 0m8.547s 00:12:58.840 user 0m19.655s 00:12:58.840 sys 0m2.468s 00:12:58.840 18:46:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.840 18:46:10 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:58.840 ************************************ 00:12:58.840 END TEST nvmf_invalid 00:12:58.840 ************************************ 00:12:58.840 18:46:10 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:58.840 18:46:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:58.840 18:46:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.840 18:46:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.840 ************************************ 00:12:58.840 START TEST nvmf_abort 00:12:58.840 ************************************ 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:58.840 * Looking for test storage... 00:12:58.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:58.840 18:46:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:58.840 18:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.746 
18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:00.746 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:00.746 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.746 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:00.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:00.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:00.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:13:00.747 00:13:00.747 --- 10.0.0.2 ping statistics --- 00:13:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.747 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:13:00.747 00:13:00.747 --- 10.0.0.1 ping statistics --- 00:13:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.747 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3472468 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3472468 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3472468 ']' 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:00.747 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:00.747 [2024-07-25 18:46:12.449449] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
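Up to this point nvmftestinit has detected the two ice ports, moved the target-side port into a private network namespace, and assigned the 10.0.0.x addresses that the ping checks above verify before nvmf_tgt starts. Condensed from the commands visible in the trace (cvl_0_0/cvl_0_1 are the names this rig detected; the nvmf_tgt path is shortened), the setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # target runs inside the namespace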
00:13:00.747 [2024-07-25 18:46:12.449520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.747 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.747 [2024-07-25 18:46:12.520186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.747 [2024-07-25 18:46:12.620439] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.747 [2024-07-25 18:46:12.620498] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.747 [2024-07-25 18:46:12.620514] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.747 [2024-07-25 18:46:12.620527] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.747 [2024-07-25 18:46:12.620538] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.747 [2024-07-25 18:46:12.620655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.747 [2024-07-25 18:46:12.620695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.747 [2024-07-25 18:46:12.620698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 [2024-07-25 18:46:12.761998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 Malloc0 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 Delay0 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:01.005 18:46:12 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 [2024-07-25 18:46:12.842901] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.005 18:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:01.005 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.264 [2024-07-25 18:46:12.907573] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:03.172 Initializing NVMe Controllers 00:13:03.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:03.172 controller IO queue size 128 less than required 00:13:03.172 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:03.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:03.172 Initialization complete. Launching workers. 
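Stripped of the xtrace prefixes, the nvmf_abort setup above reduces to a short RPC sequence: create the TCP transport, stack a delay bdev on a malloc bdev so submitted I/O lingers long enough to be aborted, publish it as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, then run the abort example against it at queue depth 128. A rough equivalent, assuming the target is started as in the trace, that rpc.py talks to the default /var/tmp/spdk.sock, and that the caller waits for the RPC socket (the test does this via waitforlisten) before issuing the calls:

    # Sketch of the abort test setup; paths are relative to the SPDK repository root.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # ... wait for /var/tmp/spdk.sock to come up ...

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256      # transport options copied from the trace
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM bdev, 4 KiB blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # ~1 s of added latency per I/O
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128                                  # drive the slow namespace at queue depth 128

The completion statistics that follow (abort submitted / success counts) are the point of the test: with the delay bdev holding reads for about a second, most of the queued commands are still outstanding when the aborts arrive.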
00:13:03.172 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33154 00:13:03.172 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33215, failed to submit 62 00:13:03.172 success 33158, unsuccess 57, failed 0 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.172 18:46:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.172 rmmod nvme_tcp 00:13:03.172 rmmod nvme_fabrics 00:13:03.172 rmmod nvme_keyring 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3472468 ']' 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3472468 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3472468 ']' 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3472468 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3472468 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3472468' 00:13:03.432 killing process with pid 3472468 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3472468 00:13:03.432 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3472468 00:13:03.724 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:03.724 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:03.724 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:03.725 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.725 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.725 18:46:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.725 18:46:15 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.725 18:46:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.636 18:46:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:05.636 00:13:05.636 real 0m7.199s 00:13:05.636 user 0m10.155s 00:13:05.636 sys 0m2.568s 00:13:05.636 18:46:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.636 18:46:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.636 ************************************ 00:13:05.636 END TEST nvmf_abort 00:13:05.636 ************************************ 00:13:05.636 18:46:17 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:05.636 18:46:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:05.636 18:46:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.636 18:46:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:05.636 ************************************ 00:13:05.636 START TEST nvmf_ns_hotplug_stress 00:13:05.636 ************************************ 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:05.636 * Looking for test storage... 00:13:05.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.636 18:46:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.636 18:46:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.636 18:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.171 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:08.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:08.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.172 18:46:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:08.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:08.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:13:08.172 00:13:08.172 --- 10.0.0.2 ping statistics --- 00:13:08.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.172 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:13:08.172 00:13:08.172 --- 10.0.0.1 ping statistics --- 00:13:08.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.172 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3474811 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3474811 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3474811 ']' 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:08.172 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.173 [2024-07-25 18:46:19.653198] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:08.173 [2024-07-25 18:46:19.653283] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.173 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.173 [2024-07-25 18:46:19.714905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.173 [2024-07-25 18:46:19.804204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:08.173 [2024-07-25 18:46:19.804267] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.173 [2024-07-25 18:46:19.804283] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.173 [2024-07-25 18:46:19.804296] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.173 [2024-07-25 18:46:19.804308] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.173 [2024-07-25 18:46:19.804403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.173 [2024-07-25 18:46:19.804502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.173 [2024-07-25 18:46:19.804505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:08.173 18:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:08.431 [2024-07-25 18:46:20.219827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.431 18:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:08.689 18:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.947 [2024-07-25 18:46:20.778637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.947 18:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.205 18:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:09.463 Malloc0 00:13:09.463 18:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:09.721 Delay0 00:13:09.979 18:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.238 18:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:10.238 NULL1 00:13:10.238 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:10.496 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3475106 00:13:10.496 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:10.496 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.496 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:10.754 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.754 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.011 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:11.011 18:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:11.270 true 00:13:11.270 18:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:11.270 18:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.836 18:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.836 18:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:11.836 18:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:12.093 true 00:13:12.093 18:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:12.093 18:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.350 18:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.607 18:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:12.607 18:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:12.864 true 00:13:12.864 18:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:12.864 18:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.239 Read completed with error (sct=0, sc=11) 00:13:14.239 18:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.239 18:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:14.239 18:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:14.497 true 00:13:14.497 18:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:14.497 18:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.428 18:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.686 18:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:15.686 18:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:15.943 true 00:13:15.943 18:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:15.943 18:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.200 18:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.457 18:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:16.457 18:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:16.457 true 00:13:16.457 18:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:16.457 18:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.714 18:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.972 18:46:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:16.972 18:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:17.229 true 00:13:17.229 18:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:17.229 18:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.602 18:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.602 18:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:18.602 18:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:18.860 true 00:13:18.860 18:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:18.860 18:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.117 18:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.374 18:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:19.374 18:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:19.630 true 00:13:19.630 18:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:19.630 18:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.562 18:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.819 18:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:20.819 18:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:21.076 true 00:13:21.076 18:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:21.076 18:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.333 18:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.591 18:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:21.591 18:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:21.848 true 00:13:21.848 18:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:21.848 18:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.412 18:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.690 18:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:22.690 18:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:22.690 true 00:13:22.690 18:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:22.690 18:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.069 18:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.069 18:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:24.069 18:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:24.326 true 00:13:24.326 18:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:24.326 18:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.583 18:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.840 18:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:24.840 18:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:25.098 true 00:13:25.098 18:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:25.098 18:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.356 18:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:13:25.614 18:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:25.614 18:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:25.871 true 00:13:25.871 18:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:25.871 18:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.804 18:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.061 18:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:27.061 18:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:27.319 true 00:13:27.319 18:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:27.319 18:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.577 18:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.834 18:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:27.834 18:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:28.090 true 00:13:28.090 18:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:28.090 18:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.346 18:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.602 18:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:28.602 18:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:28.859 true 00:13:28.859 18:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:28.859 18:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.789 18:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.789 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:13:30.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.046 18:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:30.046 18:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:30.303 true 00:13:30.303 18:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:30.304 18:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.237 18:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.494 18:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:31.494 18:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:31.750 true 00:13:31.750 18:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:31.750 18:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.007 18:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.263 18:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:32.263 18:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:32.263 true 00:13:32.520 18:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:32.520 18:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.452 18:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.709 18:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:33.709 18:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:33.966 true 00:13:33.966 18:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3475106 00:13:33.966 18:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.223 18:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.223 18:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:34.223 18:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:34.481 true 00:13:34.481 18:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:34.481 18:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.414 18:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.979 18:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:35.979 18:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:35.979 true 00:13:35.979 18:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:35.979 18:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.236 18:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.494 18:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:36.494 18:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:36.751 true 00:13:36.751 18:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:36.751 18:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.009 18:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.266 18:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:37.266 18:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:37.522 true 00:13:37.523 18:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:37.523 18:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.900 18:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.900 18:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:38.900 18:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:39.157 true 00:13:39.157 18:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:39.157 18:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.414 18:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.672 18:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:39.672 18:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:39.929 true 00:13:39.929 18:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:39.929 18:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.186 18:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.444 18:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:40.444 18:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:40.701 true 00:13:40.701 18:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:40.701 18:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.635 Initializing NVMe Controllers 00:13:41.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:41.635 Controller IO queue size 128, less than required. 00:13:41.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:41.635 Controller IO queue size 128, less than required. 00:13:41.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:41.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:41.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:41.635 Initialization complete. Launching workers. 
00:13:41.635 ======================================================== 00:13:41.635 Latency(us) 00:13:41.635 Device Information : IOPS MiB/s Average min max 00:13:41.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 759.02 0.37 75779.74 2906.21 1013798.44 00:13:41.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9077.77 4.43 14059.32 3228.36 446582.04 00:13:41.635 ======================================================== 00:13:41.635 Total : 9836.80 4.80 18821.77 2906.21 1013798.44 00:13:41.635 00:13:41.635 18:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.893 18:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:41.893 18:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:42.150 true 00:13:42.150 18:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3475106 00:13:42.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3475106) - No such process 00:13:42.150 18:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3475106 00:13:42.150 18:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.407 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.665 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:42.665 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:42.665 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:42.665 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:42.665 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:42.922 null0 00:13:42.922 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:42.922 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:42.922 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:43.180 null1 00:13:43.180 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:43.180 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:43.180 18:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:43.438 null2 00:13:43.438 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:43.438 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:13:43.438 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:43.695 null3 00:13:43.695 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:43.695 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:43.695 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:43.953 null4 00:13:43.953 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:43.953 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:43.953 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:44.210 null5 00:13:44.210 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:44.210 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:44.210 18:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:44.468 null6 00:13:44.468 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:44.468 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:44.468 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:44.726 null7 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
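The @14, @16 and @17 entries above trace the per-bdev worker that the script backgrounds for each null bdev. A minimal sketch of that add/remove loop, reconstructed from the traced commands (the $rpc_py shorthand and the exact quoting are illustrative assumptions; the real ns_hotplug_stress.sh may differ):

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above -- not the verbatim SPDK script.
    # $rpc_py is an illustrative shorthand for the traced scripts/rpc.py path.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Per-worker hotplug loop (ns_hotplug_stress.sh@14-@18 in the trace):
    # attach the given bdev as namespace $nsid, then detach it, ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }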
00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:44.726 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
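The launches traced at @59-@64 follow the driver pattern sketched below: create eight null bdevs, background one add_remove worker per bdev, collect the PIDs, and wait for all of them (the "wait 3479275 3479276 ..." entry at @66 appears just below). Same assumptions as the previous sketch; sizes match the traced bdev_null_create calls.

    # Sketch of the driver loop seen in the trace (ns_hotplug_stress.sh@58-@66); not the verbatim script.
    nthreads=8
    pids=()

    # One null bdev per worker: 100 MB with a 4096-byte block size, as traced.
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done

    # Background one add_remove worker per bdev; namespace IDs are 1-based.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done

    wait "${pids[@]}"

Running the eight workers concurrently against the same subsystem is what stresses the namespace hotplug paths under contention, which is the point of this phase of the test.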
00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3479275 3479276 3479278 3479280 3479282 3479284 3479286 3479288 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.727 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.984 18:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.242 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:45.500 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:45.758 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.758 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.758 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:45.758 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.759 18:46:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.759 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:46.017 18:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.276 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:46.533 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:46.533 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.533 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.533 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.533 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:46.533 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:46.533 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:46.791 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:46.791 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.791 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.791 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.048 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.048 
18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.306 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:47.306 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:47.306 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.306 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.306 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:47.307 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:47.307 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.307 18:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.564 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.565 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:47.822 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.080 18:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.339 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.339 
18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.339 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.339 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.339 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.339 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.339 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.339 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.597 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.855 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.112 18:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.370 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.370 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.370 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.370 
18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.370 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.370 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.370 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.370 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.628 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.886 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
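For orientation, the churn traced above is ns_hotplug_stress.sh lines 16-18: the eight null-bdev-backed namespaces are repeatedly attached to nqn.2016-06.io.spdk:cnode1 in a shuffled order and then detached again. A standalone snippet that would generate the same RPC traffic is sketched below; the rpc.py path, subsystem NQN, bdev names and nsid range are taken from the trace, while the round count and the use of shuf stand in for the actual script body, which the log does not show in full.

    # sketch only: reproduces the nvmf_subsystem_add_ns/remove_ns pattern seen in the trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for round in $(seq 1 10); do
        # attach nsid 1-8, each backed by one of the null0-null7 bdevs, in random order
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
        done
        # detach the same namespaces again, also in random order
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
    done

The counter checks at line 16 ((( ++i )) / (( i < 10 ))) are what bound how long this churn is allowed to run before the test tears down.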
00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.143 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:50.144 18:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:50.144 rmmod nvme_tcp 00:13:50.144 rmmod nvme_fabrics 00:13:50.144 rmmod nvme_keyring 00:13:50.144 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3474811 ']' 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3474811 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3474811 ']' 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3474811 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3474811 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3474811' 00:13:50.401 killing process with pid 3474811 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3474811 00:13:50.401 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3474811 00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:50.658 18:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:52.571 18:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:52.571
00:13:52.571 real 0m46.929s
00:13:52.571 user 3m34.445s
00:13:52.571 sys 0m16.284s
00:13:52.571 18:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:52.571 18:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:13:52.571 ************************************
00:13:52.571 END TEST nvmf_ns_hotplug_stress
00:13:52.571 ************************************
00:13:52.572 18:47:04 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:52.572 18:47:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:13:52.572 18:47:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:52.572 18:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:52.572 ************************************
00:13:52.572 START TEST nvmf_connect_stress
00:13:52.572 ************************************
00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:52.572 * Looking for test storage...
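Before the connect_stress run starts below, it is worth spelling out what the nvmftestfini/nvmf_tcp_fini teardown above amounted to once the xtrace noise is stripped away. A rough equivalent is sketched here; the module names, the nvmf_tgt pid (3474811) and the cvl_0_1 flush are taken from the trace, while the explicit netns deletion is only an assumption about what _remove_spdk_ns does internally.

    sync
    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics and nvme_keyring (rmmod lines above)
    modprobe -v -r nvme-fabrics
    kill 3474811                     # the nvmf_tgt process started for this test
    wait 3474811                     # only meaningful when nvmf_tgt is a child of this shell
    ip -4 addr flush cvl_0_1         # drop the initiator-side test address
    ip netns delete cvl_0_0_ns_spdk  # assumed effect of _remove_spdk_ns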
00:13:52.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.572 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.830 18:47:04 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:52.831 18:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:54.735 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:54.735 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:54.735 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.735 18:47:06 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:54.735 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.735 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:13:54.736 00:13:54.736 --- 10.0.0.2 ping statistics --- 00:13:54.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.736 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:54.736 00:13:54.736 --- 10.0.0.1 ping statistics --- 00:13:54.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.736 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3482039 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3482039 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3482039 ']' 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:54.736 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.028 [2024-07-25 18:47:06.622051] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
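The nvmftestinit block above is the part worth keeping for reference: on this "phy" runner the two ice ports show up as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, reachability is checked in both directions, and nvmf_tgt is then started inside that namespace. Collapsed out of the xtrace output, the setup is roughly the following; each command below appears verbatim in the trace, while the initial address flushes and the waitforlisten polling of /var/tmp/spdk.sock are omitted and the backgrounding of nvmf_tgt is implied rather than shown.

    # target port lives in its own netns; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open TCP/4420 on the initiator-side interface

    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The sub-millisecond ping round trips above are the sanity check that both directions work before any NVMe/TCP traffic is attempted.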
00:13:55.028 [2024-07-25 18:47:06.622136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.028 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.028 [2024-07-25 18:47:06.684373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.028 [2024-07-25 18:47:06.766666] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.028 [2024-07-25 18:47:06.766721] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.028 [2024-07-25 18:47:06.766748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.028 [2024-07-25 18:47:06.766759] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.028 [2024-07-25 18:47:06.766769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.028 [2024-07-25 18:47:06.766854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.028 [2024-07-25 18:47:06.766919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.028 [2024-07-25 18:47:06.766922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.028 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.028 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:55.028 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.028 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.028 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.289 18:47:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.289 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.290 [2024-07-25 18:47:06.902885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.290 [2024-07-25 18:47:06.932205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.290 NULL1 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3482065 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.290 18:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.547 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.547 18:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:55.547 18:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.547 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.547 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.804 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.804 18:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:55.804 18:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.804 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.804 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.368 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.368 18:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:56.368 18:47:07 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.368 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.368 18:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.625 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.625 18:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:56.625 18:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.625 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.625 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.882 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.882 18:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:56.882 18:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.882 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.882 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.138 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.138 18:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:57.138 18:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.138 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.138 18:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.395 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.395 18:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:57.395 18:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.395 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.395 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.958 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.958 18:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:57.958 18:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.958 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.958 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.215 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.215 18:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:58.215 18:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.215 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.215 18:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.475 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.475 18:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:58.475 18:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:13:58.475 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.475 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.734 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.734 18:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:58.734 18:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.734 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.734 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.991 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.991 18:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:58.991 18:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.991 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.991 18:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.559 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.559 18:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:59.559 18:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.559 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.559 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.818 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.818 18:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:13:59.818 18:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.818 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.818 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.076 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.076 18:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:00.076 18:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.076 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.076 18:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.334 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.334 18:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:00.334 18:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.334 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.334 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.593 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.593 18:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:00.593 18:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.593 18:47:12 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.594 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.163 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.163 18:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:01.163 18:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.163 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.163 18:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.422 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.422 18:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:01.422 18:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.422 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.422 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.692 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.692 18:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:01.692 18:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.692 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.692 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.956 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.956 18:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:01.956 18:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.956 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.956 18:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.215 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.215 18:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:02.215 18:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.215 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.215 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.782 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.782 18:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:02.782 18:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.782 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.782 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.040 18:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:03.040 18:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.040 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:03.041 18:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.298 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.298 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:03.298 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.298 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.298 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.556 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.556 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:03.556 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.556 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.556 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.816 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.816 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:03.816 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.816 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.816 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.384 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.384 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:04.384 18:47:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.384 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.384 18:47:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.642 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.642 18:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:04.642 18:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.642 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.642 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.900 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.900 18:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:04.900 18:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.900 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.900 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.158 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.158 18:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:05.159 18:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.159 18:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.159 18:47:16 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.159 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3482065 00:14:05.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3482065) - No such process 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3482065 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.417 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.417 rmmod nvme_tcp 00:14:05.417 rmmod nvme_fabrics 00:14:05.676 rmmod nvme_keyring 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3482039 ']' 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3482039 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3482039 ']' 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3482039 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3482039 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3482039' 00:14:05.676 killing process with pid 3482039 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3482039 00:14:05.676 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3482039 00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
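The repeating "kill -0 3482065" / rpc_cmd pairs above are connect_stress.sh polling its stress client: signal 0 delivers nothing and only tests whether the PID still exists, and while it does the script keeps driving RPCs at the target. Once kill reports "No such process" the script reaps the client, removes its temporary rpc.txt and tears the target down. A rough sketch of that loop, reconstructed from the script line numbers visible in the trace (variable names and the RPC input redirection are assumptions, not copied from the script):

  # approximate shape of test/nvmf/target/connect_stress.sh lines 34-43 (reconstruction, not verbatim)
  while kill -0 "$perf_pid"; do        # line 34: signal 0 = "is the stress client still alive?"
      rpc_cmd < "$testdir/rpc.txt"     # line 35: exercise the target with RPCs in the meantime
  done
  wait "$perf_pid"                     # line 38: collect the client's exit status
  rm -f "$testdir/rpc.txt"             # line 39: drop the temporary RPC batch file
  trap - SIGINT SIGTERM EXIT           # line 41: clear the cleanup trap
  nvmftestfini                         # line 43: unload nvme-tcp/nvme-fabrics and stop nvmf_tgt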
00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.934 18:47:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.842 18:47:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.842 00:14:07.842 real 0m15.223s 00:14:07.842 user 0m38.232s 00:14:07.842 sys 0m5.874s 00:14:07.842 18:47:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.842 18:47:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.842 ************************************ 00:14:07.842 END TEST nvmf_connect_stress 00:14:07.842 ************************************ 00:14:07.842 18:47:19 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:07.842 18:47:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:07.842 18:47:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.842 18:47:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.842 ************************************ 00:14:07.842 START TEST nvmf_fused_ordering 00:14:07.842 ************************************ 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:07.842 * Looking for test storage... 
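The END TEST summary closes connect_stress (about 15 seconds wall clock) and run_test immediately launches the next suite. run_test essentially times and executes the command it is handed, so the same suite can be reproduced outside the CI harness by invoking the script directly from an SPDK checkout; the path below is this builder's workspace, and root is assumed because the script loads kernel modules and reconfigures network namespaces:

  # re-run only the fused-ordering suite against the TCP transport (builder-specific path)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/fused_ordering.sh --transport=tcp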
00:14:07.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.842 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.099 18:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:09.997 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:09.997 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:09.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.997 18:47:21 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:09.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:14:09.997 00:14:09.997 --- 10.0.0.2 ping statistics --- 00:14:09.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.997 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:14:09.997 00:14:09.997 --- 10.0.0.1 ping statistics --- 00:14:09.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.997 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.997 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3485210 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3485210 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3485210 ']' 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.998 18:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.998 [2024-07-25 18:47:21.862784] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
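Everything from gather_supported_nvmf_pci_devs down to the two ping checks is nvmftestinit building a self-contained NVMe/TCP topology across the two E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, connectivity is verified in both directions, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace on core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF). A condensed sketch of that plumbing, taken from the trace above (interface names and addresses are specific to this host):

  # target port into its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &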
00:14:09.998 [2024-07-25 18:47:21.862861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.255 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.255 [2024-07-25 18:47:21.925695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.255 [2024-07-25 18:47:22.008541] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.255 [2024-07-25 18:47:22.008607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.255 [2024-07-25 18:47:22.008621] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.255 [2024-07-25 18:47:22.008632] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.255 [2024-07-25 18:47:22.008656] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.255 [2024-07-25 18:47:22.008685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.255 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.255 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:10.255 18:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.255 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.255 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 [2024-07-25 18:47:22.152453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 [2024-07-25 18:47:22.168681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 NULL1 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.515 18:47:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:10.515 [2024-07-25 18:47:22.212840] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:10.515 [2024-07-25 18:47:22.212889] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485231 ] 00:14:10.515 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.084 Attached to nqn.2016-06.io.spdk:cnode1 00:14:11.084 Namespace ID: 1 size: 1GB 00:14:11.084 fused_ordering(0) 00:14:11.084 fused_ordering(1) 00:14:11.084 fused_ordering(2) 00:14:11.084 fused_ordering(3) 00:14:11.084 fused_ordering(4) 00:14:11.084 fused_ordering(5) 00:14:11.084 fused_ordering(6) 00:14:11.084 fused_ordering(7) 00:14:11.084 fused_ordering(8) 00:14:11.084 fused_ordering(9) 00:14:11.084 fused_ordering(10) 00:14:11.084 fused_ordering(11) 00:14:11.084 fused_ordering(12) 00:14:11.084 fused_ordering(13) 00:14:11.084 fused_ordering(14) 00:14:11.084 fused_ordering(15) 00:14:11.084 fused_ordering(16) 00:14:11.084 fused_ordering(17) 00:14:11.084 fused_ordering(18) 00:14:11.084 fused_ordering(19) 00:14:11.084 fused_ordering(20) 00:14:11.084 fused_ordering(21) 00:14:11.084 fused_ordering(22) 00:14:11.084 fused_ordering(23) 00:14:11.084 fused_ordering(24) 00:14:11.084 fused_ordering(25) 00:14:11.084 fused_ordering(26) 00:14:11.084 fused_ordering(27) 00:14:11.084 fused_ordering(28) 00:14:11.084 fused_ordering(29) 00:14:11.084 fused_ordering(30) 00:14:11.084 fused_ordering(31) 00:14:11.084 fused_ordering(32) 00:14:11.084 fused_ordering(33) 00:14:11.084 fused_ordering(34) 00:14:11.084 fused_ordering(35) 00:14:11.084 fused_ordering(36) 00:14:11.084 fused_ordering(37) 00:14:11.084 fused_ordering(38) 00:14:11.084 fused_ordering(39) 00:14:11.084 fused_ordering(40) 00:14:11.084 fused_ordering(41) 00:14:11.084 fused_ordering(42) 00:14:11.084 fused_ordering(43) 00:14:11.084 fused_ordering(44) 00:14:11.084 fused_ordering(45) 
00:14:11.084 fused_ordering(46) 00:14:11.084 fused_ordering(47) 00:14:11.084 fused_ordering(48) 00:14:11.084 fused_ordering(49) 00:14:11.084 fused_ordering(50) 00:14:11.085 fused_ordering(51) 00:14:11.085 fused_ordering(52) 00:14:11.085 fused_ordering(53) 00:14:11.085 fused_ordering(54) 00:14:11.085 fused_ordering(55) 00:14:11.085 fused_ordering(56) 00:14:11.085 fused_ordering(57) 00:14:11.085 fused_ordering(58) 00:14:11.085 fused_ordering(59) 00:14:11.085 fused_ordering(60) 00:14:11.085 fused_ordering(61) 00:14:11.085 fused_ordering(62) 00:14:11.085 fused_ordering(63) 00:14:11.085 fused_ordering(64) 00:14:11.085 fused_ordering(65) 00:14:11.085 fused_ordering(66) 00:14:11.085 fused_ordering(67) 00:14:11.085 fused_ordering(68) 00:14:11.085 fused_ordering(69) 00:14:11.085 fused_ordering(70) 00:14:11.085 fused_ordering(71) 00:14:11.085 fused_ordering(72) 00:14:11.085 fused_ordering(73) 00:14:11.085 fused_ordering(74) 00:14:11.085 fused_ordering(75) 00:14:11.085 fused_ordering(76) 00:14:11.085 fused_ordering(77) 00:14:11.085 fused_ordering(78) 00:14:11.085 fused_ordering(79) 00:14:11.085 fused_ordering(80) 00:14:11.085 fused_ordering(81) 00:14:11.085 fused_ordering(82) 00:14:11.085 fused_ordering(83) 00:14:11.085 fused_ordering(84) 00:14:11.085 fused_ordering(85) 00:14:11.085 fused_ordering(86) 00:14:11.085 fused_ordering(87) 00:14:11.085 fused_ordering(88) 00:14:11.085 fused_ordering(89) 00:14:11.085 fused_ordering(90) 00:14:11.085 fused_ordering(91) 00:14:11.085 fused_ordering(92) 00:14:11.085 fused_ordering(93) 00:14:11.085 fused_ordering(94) 00:14:11.085 fused_ordering(95) 00:14:11.085 fused_ordering(96) 00:14:11.085 fused_ordering(97) 00:14:11.085 fused_ordering(98) 00:14:11.085 fused_ordering(99) 00:14:11.085 fused_ordering(100) 00:14:11.085 fused_ordering(101) 00:14:11.085 fused_ordering(102) 00:14:11.085 fused_ordering(103) 00:14:11.085 fused_ordering(104) 00:14:11.085 fused_ordering(105) 00:14:11.085 fused_ordering(106) 00:14:11.085 fused_ordering(107) 00:14:11.085 fused_ordering(108) 00:14:11.085 fused_ordering(109) 00:14:11.085 fused_ordering(110) 00:14:11.085 fused_ordering(111) 00:14:11.085 fused_ordering(112) 00:14:11.085 fused_ordering(113) 00:14:11.085 fused_ordering(114) 00:14:11.085 fused_ordering(115) 00:14:11.085 fused_ordering(116) 00:14:11.085 fused_ordering(117) 00:14:11.085 fused_ordering(118) 00:14:11.085 fused_ordering(119) 00:14:11.085 fused_ordering(120) 00:14:11.085 fused_ordering(121) 00:14:11.085 fused_ordering(122) 00:14:11.085 fused_ordering(123) 00:14:11.085 fused_ordering(124) 00:14:11.085 fused_ordering(125) 00:14:11.085 fused_ordering(126) 00:14:11.085 fused_ordering(127) 00:14:11.085 fused_ordering(128) 00:14:11.085 fused_ordering(129) 00:14:11.085 fused_ordering(130) 00:14:11.085 fused_ordering(131) 00:14:11.085 fused_ordering(132) 00:14:11.085 fused_ordering(133) 00:14:11.085 fused_ordering(134) 00:14:11.085 fused_ordering(135) 00:14:11.085 fused_ordering(136) 00:14:11.085 fused_ordering(137) 00:14:11.085 fused_ordering(138) 00:14:11.085 fused_ordering(139) 00:14:11.085 fused_ordering(140) 00:14:11.085 fused_ordering(141) 00:14:11.085 fused_ordering(142) 00:14:11.085 fused_ordering(143) 00:14:11.085 fused_ordering(144) 00:14:11.085 fused_ordering(145) 00:14:11.085 fused_ordering(146) 00:14:11.085 fused_ordering(147) 00:14:11.085 fused_ordering(148) 00:14:11.085 fused_ordering(149) 00:14:11.085 fused_ordering(150) 00:14:11.085 fused_ordering(151) 00:14:11.085 fused_ordering(152) 00:14:11.085 fused_ordering(153) 00:14:11.085 fused_ordering(154) 
00:14:11.085 fused_ordering(155) 00:14:11.085 fused_ordering(156) 00:14:11.085 fused_ordering(157) 00:14:11.085 fused_ordering(158) 00:14:11.085 fused_ordering(159) 00:14:11.085 fused_ordering(160) 00:14:11.085 fused_ordering(161) 00:14:11.085 fused_ordering(162) 00:14:11.085 fused_ordering(163) 00:14:11.085 fused_ordering(164) 00:14:11.085 fused_ordering(165) 00:14:11.085 fused_ordering(166) 00:14:11.085 fused_ordering(167) 00:14:11.085 fused_ordering(168) 00:14:11.085 fused_ordering(169) 00:14:11.085 fused_ordering(170) 00:14:11.085 fused_ordering(171) 00:14:11.085 fused_ordering(172) 00:14:11.085 fused_ordering(173) 00:14:11.085 fused_ordering(174) 00:14:11.085 fused_ordering(175) 00:14:11.085 fused_ordering(176) 00:14:11.085 fused_ordering(177) 00:14:11.085 fused_ordering(178) 00:14:11.085 fused_ordering(179) 00:14:11.085 fused_ordering(180) 00:14:11.085 fused_ordering(181) 00:14:11.085 fused_ordering(182) 00:14:11.085 fused_ordering(183) 00:14:11.085 fused_ordering(184) 00:14:11.085 fused_ordering(185) 00:14:11.085 fused_ordering(186) 00:14:11.085 fused_ordering(187) 00:14:11.085 fused_ordering(188) 00:14:11.085 fused_ordering(189) 00:14:11.085 fused_ordering(190) 00:14:11.085 fused_ordering(191) 00:14:11.085 fused_ordering(192) 00:14:11.085 fused_ordering(193) 00:14:11.085 fused_ordering(194) 00:14:11.085 fused_ordering(195) 00:14:11.085 fused_ordering(196) 00:14:11.085 fused_ordering(197) 00:14:11.085 fused_ordering(198) 00:14:11.085 fused_ordering(199) 00:14:11.085 fused_ordering(200) 00:14:11.085 fused_ordering(201) 00:14:11.085 fused_ordering(202) 00:14:11.085 fused_ordering(203) 00:14:11.085 fused_ordering(204) 00:14:11.085 fused_ordering(205) 00:14:11.657 fused_ordering(206) 00:14:11.657 fused_ordering(207) 00:14:11.657 fused_ordering(208) 00:14:11.657 fused_ordering(209) 00:14:11.657 fused_ordering(210) 00:14:11.657 fused_ordering(211) 00:14:11.657 fused_ordering(212) 00:14:11.657 fused_ordering(213) 00:14:11.657 fused_ordering(214) 00:14:11.657 fused_ordering(215) 00:14:11.657 fused_ordering(216) 00:14:11.657 fused_ordering(217) 00:14:11.657 fused_ordering(218) 00:14:11.657 fused_ordering(219) 00:14:11.657 fused_ordering(220) 00:14:11.657 fused_ordering(221) 00:14:11.657 fused_ordering(222) 00:14:11.657 fused_ordering(223) 00:14:11.657 fused_ordering(224) 00:14:11.657 fused_ordering(225) 00:14:11.657 fused_ordering(226) 00:14:11.657 fused_ordering(227) 00:14:11.657 fused_ordering(228) 00:14:11.657 fused_ordering(229) 00:14:11.657 fused_ordering(230) 00:14:11.657 fused_ordering(231) 00:14:11.657 fused_ordering(232) 00:14:11.657 fused_ordering(233) 00:14:11.657 fused_ordering(234) 00:14:11.657 fused_ordering(235) 00:14:11.657 fused_ordering(236) 00:14:11.657 fused_ordering(237) 00:14:11.657 fused_ordering(238) 00:14:11.657 fused_ordering(239) 00:14:11.657 fused_ordering(240) 00:14:11.657 fused_ordering(241) 00:14:11.657 fused_ordering(242) 00:14:11.657 fused_ordering(243) 00:14:11.657 fused_ordering(244) 00:14:11.658 fused_ordering(245) 00:14:11.658 fused_ordering(246) 00:14:11.658 fused_ordering(247) 00:14:11.658 fused_ordering(248) 00:14:11.658 fused_ordering(249) 00:14:11.658 fused_ordering(250) 00:14:11.658 fused_ordering(251) 00:14:11.658 fused_ordering(252) 00:14:11.658 fused_ordering(253) 00:14:11.658 fused_ordering(254) 00:14:11.658 fused_ordering(255) 00:14:11.658 fused_ordering(256) 00:14:11.658 fused_ordering(257) 00:14:11.658 fused_ordering(258) 00:14:11.658 fused_ordering(259) 00:14:11.658 fused_ordering(260) 00:14:11.658 fused_ordering(261) 00:14:11.658 
fused_ordering(262) 00:14:11.658 fused_ordering(263) 00:14:11.658 fused_ordering(264) 00:14:11.658 fused_ordering(265) 00:14:11.658 fused_ordering(266) 00:14:11.658 fused_ordering(267) 00:14:11.658 fused_ordering(268) 00:14:11.658 fused_ordering(269) 00:14:11.658 fused_ordering(270) 00:14:11.658 fused_ordering(271) 00:14:11.658 fused_ordering(272) 00:14:11.658 fused_ordering(273) 00:14:11.658 fused_ordering(274) 00:14:11.658 fused_ordering(275) 00:14:11.658 fused_ordering(276) 00:14:11.658 fused_ordering(277) 00:14:11.658 fused_ordering(278) 00:14:11.658 fused_ordering(279) 00:14:11.658 fused_ordering(280) 00:14:11.658 fused_ordering(281) 00:14:11.658 fused_ordering(282) 00:14:11.658 fused_ordering(283) 00:14:11.658 fused_ordering(284) 00:14:11.658 fused_ordering(285) 00:14:11.658 fused_ordering(286) 00:14:11.658 fused_ordering(287) 00:14:11.658 fused_ordering(288) 00:14:11.658 fused_ordering(289) 00:14:11.658 fused_ordering(290) 00:14:11.658 fused_ordering(291) 00:14:11.658 fused_ordering(292) 00:14:11.658 fused_ordering(293) 00:14:11.658 fused_ordering(294) 00:14:11.658 fused_ordering(295) 00:14:11.658 fused_ordering(296) 00:14:11.658 fused_ordering(297) 00:14:11.658 fused_ordering(298) 00:14:11.658 fused_ordering(299) 00:14:11.658 fused_ordering(300) 00:14:11.658 fused_ordering(301) 00:14:11.658 fused_ordering(302) 00:14:11.658 fused_ordering(303) 00:14:11.658 fused_ordering(304) 00:14:11.658 fused_ordering(305) 00:14:11.658 fused_ordering(306) 00:14:11.658 fused_ordering(307) 00:14:11.658 fused_ordering(308) 00:14:11.658 fused_ordering(309) 00:14:11.658 fused_ordering(310) 00:14:11.658 fused_ordering(311) 00:14:11.658 fused_ordering(312) 00:14:11.658 fused_ordering(313) 00:14:11.658 fused_ordering(314) 00:14:11.658 fused_ordering(315) 00:14:11.658 fused_ordering(316) 00:14:11.658 fused_ordering(317) 00:14:11.658 fused_ordering(318) 00:14:11.658 fused_ordering(319) 00:14:11.658 fused_ordering(320) 00:14:11.658 fused_ordering(321) 00:14:11.658 fused_ordering(322) 00:14:11.658 fused_ordering(323) 00:14:11.658 fused_ordering(324) 00:14:11.658 fused_ordering(325) 00:14:11.658 fused_ordering(326) 00:14:11.658 fused_ordering(327) 00:14:11.658 fused_ordering(328) 00:14:11.658 fused_ordering(329) 00:14:11.658 fused_ordering(330) 00:14:11.658 fused_ordering(331) 00:14:11.658 fused_ordering(332) 00:14:11.658 fused_ordering(333) 00:14:11.658 fused_ordering(334) 00:14:11.658 fused_ordering(335) 00:14:11.658 fused_ordering(336) 00:14:11.658 fused_ordering(337) 00:14:11.658 fused_ordering(338) 00:14:11.658 fused_ordering(339) 00:14:11.658 fused_ordering(340) 00:14:11.658 fused_ordering(341) 00:14:11.658 fused_ordering(342) 00:14:11.658 fused_ordering(343) 00:14:11.658 fused_ordering(344) 00:14:11.658 fused_ordering(345) 00:14:11.658 fused_ordering(346) 00:14:11.658 fused_ordering(347) 00:14:11.658 fused_ordering(348) 00:14:11.658 fused_ordering(349) 00:14:11.658 fused_ordering(350) 00:14:11.658 fused_ordering(351) 00:14:11.658 fused_ordering(352) 00:14:11.658 fused_ordering(353) 00:14:11.658 fused_ordering(354) 00:14:11.658 fused_ordering(355) 00:14:11.658 fused_ordering(356) 00:14:11.658 fused_ordering(357) 00:14:11.658 fused_ordering(358) 00:14:11.658 fused_ordering(359) 00:14:11.658 fused_ordering(360) 00:14:11.658 fused_ordering(361) 00:14:11.658 fused_ordering(362) 00:14:11.658 fused_ordering(363) 00:14:11.658 fused_ordering(364) 00:14:11.658 fused_ordering(365) 00:14:11.658 fused_ordering(366) 00:14:11.658 fused_ordering(367) 00:14:11.658 fused_ordering(368) 00:14:11.658 fused_ordering(369) 
00:14:11.658 fused_ordering(370) 00:14:11.658 fused_ordering(371) 00:14:11.658 fused_ordering(372) 00:14:11.658 fused_ordering(373) 00:14:11.658 fused_ordering(374) 00:14:11.658 fused_ordering(375) 00:14:11.658 fused_ordering(376) 00:14:11.658 fused_ordering(377) 00:14:11.658 fused_ordering(378) 00:14:11.658 fused_ordering(379) 00:14:11.658 fused_ordering(380) 00:14:11.658 fused_ordering(381) 00:14:11.658 fused_ordering(382) 00:14:11.658 fused_ordering(383) 00:14:11.658 fused_ordering(384) 00:14:11.658 fused_ordering(385) 00:14:11.658 fused_ordering(386) 00:14:11.658 fused_ordering(387) 00:14:11.658 fused_ordering(388) 00:14:11.658 fused_ordering(389) 00:14:11.658 fused_ordering(390) 00:14:11.658 fused_ordering(391) 00:14:11.658 fused_ordering(392) 00:14:11.658 fused_ordering(393) 00:14:11.658 fused_ordering(394) 00:14:11.658 fused_ordering(395) 00:14:11.658 fused_ordering(396) 00:14:11.658 fused_ordering(397) 00:14:11.658 fused_ordering(398) 00:14:11.658 fused_ordering(399) 00:14:11.658 fused_ordering(400) 00:14:11.658 fused_ordering(401) 00:14:11.658 fused_ordering(402) 00:14:11.658 fused_ordering(403) 00:14:11.658 fused_ordering(404) 00:14:11.658 fused_ordering(405) 00:14:11.658 fused_ordering(406) 00:14:11.658 fused_ordering(407) 00:14:11.658 fused_ordering(408) 00:14:11.658 fused_ordering(409) 00:14:11.658 fused_ordering(410) 00:14:11.918 fused_ordering(411) 00:14:11.918 fused_ordering(412) 00:14:11.918 fused_ordering(413) 00:14:11.918 fused_ordering(414) 00:14:11.919 fused_ordering(415) 00:14:11.919 fused_ordering(416) 00:14:11.919 fused_ordering(417) 00:14:11.919 fused_ordering(418) 00:14:11.919 fused_ordering(419) 00:14:11.919 fused_ordering(420) 00:14:11.919 fused_ordering(421) 00:14:11.919 fused_ordering(422) 00:14:11.919 fused_ordering(423) 00:14:11.919 fused_ordering(424) 00:14:11.919 fused_ordering(425) 00:14:11.919 fused_ordering(426) 00:14:11.919 fused_ordering(427) 00:14:11.919 fused_ordering(428) 00:14:11.919 fused_ordering(429) 00:14:11.919 fused_ordering(430) 00:14:11.919 fused_ordering(431) 00:14:11.919 fused_ordering(432) 00:14:11.919 fused_ordering(433) 00:14:11.919 fused_ordering(434) 00:14:11.919 fused_ordering(435) 00:14:11.919 fused_ordering(436) 00:14:11.919 fused_ordering(437) 00:14:11.919 fused_ordering(438) 00:14:11.919 fused_ordering(439) 00:14:11.919 fused_ordering(440) 00:14:11.919 fused_ordering(441) 00:14:11.919 fused_ordering(442) 00:14:11.919 fused_ordering(443) 00:14:11.919 fused_ordering(444) 00:14:11.919 fused_ordering(445) 00:14:11.919 fused_ordering(446) 00:14:11.919 fused_ordering(447) 00:14:11.919 fused_ordering(448) 00:14:11.919 fused_ordering(449) 00:14:11.919 fused_ordering(450) 00:14:11.919 fused_ordering(451) 00:14:11.919 fused_ordering(452) 00:14:11.919 fused_ordering(453) 00:14:11.919 fused_ordering(454) 00:14:11.919 fused_ordering(455) 00:14:11.919 fused_ordering(456) 00:14:11.919 fused_ordering(457) 00:14:11.919 fused_ordering(458) 00:14:11.919 fused_ordering(459) 00:14:11.919 fused_ordering(460) 00:14:11.919 fused_ordering(461) 00:14:11.919 fused_ordering(462) 00:14:11.919 fused_ordering(463) 00:14:11.919 fused_ordering(464) 00:14:11.919 fused_ordering(465) 00:14:11.919 fused_ordering(466) 00:14:11.919 fused_ordering(467) 00:14:11.919 fused_ordering(468) 00:14:11.919 fused_ordering(469) 00:14:11.919 fused_ordering(470) 00:14:11.919 fused_ordering(471) 00:14:11.919 fused_ordering(472) 00:14:11.919 fused_ordering(473) 00:14:11.919 fused_ordering(474) 00:14:11.919 fused_ordering(475) 00:14:11.919 fused_ordering(476) 00:14:11.919 
fused_ordering(477) 00:14:11.919 fused_ordering(478) 00:14:11.919 [... repetitive fused_ordering trace entries 479 through 1012 condensed: the counter increments by one per entry while the timestamp advances from 00:14:11.919 through 00:14:12.488/489 to 00:14:13.425/426 ...] fused_ordering(1013) 00:14:13.426 
fused_ordering(1014) 00:14:13.426 fused_ordering(1015) 00:14:13.426 fused_ordering(1016) 00:14:13.426 fused_ordering(1017) 00:14:13.426 fused_ordering(1018) 00:14:13.426 fused_ordering(1019) 00:14:13.426 fused_ordering(1020) 00:14:13.426 fused_ordering(1021) 00:14:13.426 fused_ordering(1022) 00:14:13.426 fused_ordering(1023) 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.426 rmmod nvme_tcp 00:14:13.426 rmmod nvme_fabrics 00:14:13.426 rmmod nvme_keyring 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3485210 ']' 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3485210 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3485210 ']' 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3485210 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3485210 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3485210' 00:14:13.426 killing process with pid 3485210 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3485210 00:14:13.426 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3485210 00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
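For readers skimming the teardown trace above, a minimal shell sketch of what the nvmftestfini/nvmfcleanup steps amount to in this run is shown below; the pid, module names and interface names are the ones reported in the log, and the netns deletion line is an assumption about what the _remove_spdk_ns helper does rather than its actual body.

#!/usr/bin/env bash
# Condensed sketch of the fused_ordering teardown traced above (values taken from this run).
nvmfpid=3485210                  # nvmf_tgt pid reported by killprocess above
sync                             # flush outstanding I/O before unloading kernel modules
modprobe -v -r nvme-tcp          # the trace shows this also triggers rmmod of nvme_tcp/nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"              # terminate the target; the script then waits for it to exit
fi
ip netns del cvl_0_0_ns_spdk 2>/dev/null || true   # assumption: _remove_spdk_ns removes the test namespace
ip -4 addr flush cvl_0_1                           # flush the initiator-side address, as in the next trace line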
00:14:13.687 18:47:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.596 18:47:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.596 00:14:15.596 real 0m7.721s 00:14:15.596 user 0m4.653s 00:14:15.596 sys 0m3.631s 00:14:15.596 18:47:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:15.596 18:47:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.596 ************************************ 00:14:15.596 END TEST nvmf_fused_ordering 00:14:15.596 ************************************ 00:14:15.596 18:47:27 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:15.596 18:47:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:15.596 18:47:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.596 18:47:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.596 ************************************ 00:14:15.596 START TEST nvmf_delete_subsystem 00:14:15.596 ************************************ 00:14:15.596 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:15.854 * Looking for test storage... 00:14:15.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.854 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.855 18:47:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:17.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:17.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.758 18:47:29 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:17.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:17.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.758 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:14:17.759 00:14:17.759 --- 10.0.0.2 ping statistics --- 00:14:17.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.759 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:14:17.759 00:14:17.759 --- 10.0.0.1 ping statistics --- 00:14:17.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.759 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3487570 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3487570 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3487570 ']' 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:17.759 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.759 [2024-07-25 18:47:29.632542] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:17.759 [2024-07-25 18:47:29.632624] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.018 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.018 [2024-07-25 18:47:29.698094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:18.018 [2024-07-25 18:47:29.786879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
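The nvmf_tcp_init and nvmfappstart trace above reduces to a short sequence of plain iproute2/iptables commands plus a target launch; the sketch below uses the interface names, addresses and nvmf_tgt flags recorded in this run, a repo-relative path to nvmf_tgt, and a simple socket poll as a stand-in for the waitforlisten helper.

#!/usr/bin/env bash
# Sketch of the network-namespace plumbing and target launch traced above.
set -e
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                      # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target, as in the trace
ip netns exec "$ns" ping -c 1 10.0.0.1               # target -> initiator
# Launch the target in the namespace with the same core mask / trace flags as this run,
# then poll for the RPC socket instead of calling the waitforlisten helper.
ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done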
00:14:18.018 [2024-07-25 18:47:29.786936] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.018 [2024-07-25 18:47:29.786964] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.018 [2024-07-25 18:47:29.786980] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.018 [2024-07-25 18:47:29.786990] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.018 [2024-07-25 18:47:29.787080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.018 [2024-07-25 18:47:29.787086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.276 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:18.276 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:18.276 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.276 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.276 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.276 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.276 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 [2024-07-25 18:47:29.937999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 [2024-07-25 18:47:29.954271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 NULL1 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 Delay0 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3487593 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:18.277 18:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:18.277 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.277 [2024-07-25 18:47:30.028899] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
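Before the perf output below, it is worth seeing the whole scenario in one place: the target is configured with a delay bdev so I/O stays in flight long enough for the delete that follows to race against it. A condensed sketch, assuming rpc_cmd resolves to scripts/rpc.py (as it normally does in these tests) and using repo-relative paths:

#!/usr/bin/env bash
# Sketch of the delete_subsystem scenario traced above: a slow (delay) bdev is exported
# over NVMe/TCP, perf drives queue-depth-128 I/O at it, and the subsystem is deleted
# while that I/O is still in flight.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                       # size in MiB, block size in bytes
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000            # roughly 1 s of added latency per I/O class
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Start the initiator-side load, give it a head start, then delete the subsystem under it.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # aborts the outstanding I/O, as seen below
wait "$perf_pid" || true                                   # perf exits with an error once its I/O is aborted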
00:14:20.182 18:47:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.182 18:47:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.182 18:47:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 [2024-07-25 18:47:32.111385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1ac00c600 is same with the state(5) to be set 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed 
with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 starting I/O failed: -6 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Write completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.440 Read completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 
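As context for the completions above: sct/sc are the NVMe status code type and status code reported for each aborted command; sct=0 with sc=8 corresponds to the base spec's generic status 08h, "Command Aborted due to SQ Deletion", which is what deleting the subsystem under queue-depth-128 load is expected to produce. A tiny, purely hypothetical helper for labelling these pairs:

#!/usr/bin/env bash
# Hypothetical helper (not part of the test) to label the (sct, sc) pairs printed above.
decode_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        0/0) echo "generic: successful completion" ;;
        0/8) echo "generic: command aborted due to SQ deletion" ;;   # what this log shows
        *)   echo "sct=$sct sc=$sc (see the NVMe base spec status tables)" ;;
    esac
}
decode_status 0 8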
00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 starting I/O failed: -6 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 [2024-07-25 18:47:32.112240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025180 is same with the state(5) to be set 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Write completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 00:14:20.441 Read completed with error (sct=0, sc=8) 
00:14:20.441 Write completed with error (sct=0, sc=8)
00:14:20.441 Read completed with error (sct=0, sc=8)
[... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:14:21.375 [2024-07-25 18:47:33.084085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10288b0 is same with the state(5) to be set
[... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:14:21.375 [2024-07-25 18:47:33.110927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025aa0 is same with the state(5) to be set
[... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:14:21.375 [2024-07-25 18:47:33.111137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025360 is same with the state(5) to be set
[... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:14:21.375 [2024-07-25 18:47:33.114050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1ac000c00 is same with the state(5) to be set
[... repeated Read/Write completions with error (sct=0, sc=8) ...]
00:14:21.376 Read completed
with error (sct=0, sc=8) 00:14:21.376 Read completed with error (sct=0, sc=8) 00:14:21.376 Read completed with error (sct=0, sc=8) 00:14:21.376 Read completed with error (sct=0, sc=8) 00:14:21.376 [2024-07-25 18:47:33.114757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe1ac00c2f0 is same with the state(5) to be set 00:14:21.376 Initializing NVMe Controllers 00:14:21.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.376 Controller IO queue size 128, less than required. 00:14:21.376 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:21.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:21.376 Initialization complete. Launching workers. 00:14:21.376 ======================================================== 00:14:21.376 Latency(us) 00:14:21.376 Device Information : IOPS MiB/s Average min max 00:14:21.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.64 0.08 892842.52 431.58 1013529.70 00:14:21.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.19 0.08 907010.37 629.44 1012070.22 00:14:21.376 ======================================================== 00:14:21.376 Total : 336.83 0.16 899790.82 431.58 1013529.70 00:14:21.376 00:14:21.376 [2024-07-25 18:47:33.115200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10288b0 (9): Bad file descriptor 00:14:21.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:21.376 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.376 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:21.376 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3487593 00:14:21.376 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3487593 00:14:21.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3487593) - No such process 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3487593 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3487593 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3487593 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:21.941 18:47:33 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.941 [2024-07-25 18:47:33.635969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3488070 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:21.941 18:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:21.941 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.941 [2024-07-25 18:47:33.701738] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
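The trace above re-creates the target state and immediately puts it under I/O load before the test deletes the subsystem out from under it: the subsystem is re-created over RPC, the TCP listener and the Delay0 namespace are re-attached, and spdk_nvme_perf is launched in the background while the script polls it with kill -0. A condensed manual equivalent, using only commands that appear in this trace, might look like the sketch below; the rpc/perf shell variables and the simplified polling loop are illustrative shorthand, not part of delete_subsystem.sh itself.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    # Re-create the subsystem and re-attach the listener and the Delay0 namespace (same flags as in the trace)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start the same randrw workload in the background, then poll it the way the script's delay loop does
    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    while kill -0 "$perf_pid" 2>/dev/null; do
      sleep 0.5
    done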
00:14:22.509 18:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:22.509 18:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:22.509 18:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:23.071 18:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:23.071 18:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:23.071 18:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:23.330 18:47:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:23.330 18:47:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:23.330 18:47:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:23.899 18:47:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:23.899 18:47:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:23.899 18:47:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:24.467 18:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:24.467 18:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:24.467 18:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:25.036 18:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:25.036 18:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:25.036 18:47:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:25.036 Initializing NVMe Controllers 00:14:25.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.036 Controller IO queue size 128, less than required. 00:14:25.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:25.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:25.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:25.036 Initialization complete. Launching workers. 
00:14:25.036 ======================================================== 00:14:25.036 Latency(us) 00:14:25.036 Device Information : IOPS MiB/s Average min max 00:14:25.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003357.34 1000179.63 1010454.50 00:14:25.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005324.47 1000215.73 1012997.48 00:14:25.036 ======================================================== 00:14:25.036 Total : 256.00 0.12 1004340.91 1000179.63 1012997.48 00:14:25.036 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3488070 00:14:25.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3488070) - No such process 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3488070 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.296 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.554 rmmod nvme_tcp 00:14:25.554 rmmod nvme_fabrics 00:14:25.554 rmmod nvme_keyring 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3487570 ']' 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3487570 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3487570 ']' 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3487570 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3487570 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3487570' 00:14:25.554 killing process with pid 3487570 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3487570 00:14:25.554 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 
3487570 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.814 18:47:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.715 18:47:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.715 00:14:27.715 real 0m12.105s 00:14:27.715 user 0m27.460s 00:14:27.715 sys 0m2.910s 00:14:27.715 18:47:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.715 18:47:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.715 ************************************ 00:14:27.715 END TEST nvmf_delete_subsystem 00:14:27.715 ************************************ 00:14:27.715 18:47:39 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:27.715 18:47:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:27.715 18:47:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:27.715 18:47:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.715 ************************************ 00:14:27.715 START TEST nvmf_ns_masking 00:14:27.715 ************************************ 00:14:27.715 18:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:27.973 * Looking for test storage... 
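Before ns_masking starts, nvmftestfini has just torn the delete_subsystem environment back down. Stripped of the xtrace plumbing, the cleanup shown above amounts to roughly the following; the PID 3487570 and the cvl_0_1 interface name are specific to this run.

    # Unload the initiator-side kernel NVMe modules loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf_tgt instance that served this test and wait for it to exit
    kill 3487570
    wait 3487570

    # Drop the test addresses from the initiator interface
    ip -4 addr flush cvl_0_1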
00:14:27.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.973 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.973 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:27.973 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.973 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.973 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.973 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.973 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=3478183f-7698-4276-9db5-d44fe0aea247 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.974 18:47:39 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.974 18:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:29.906 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:29.906 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:29.906 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:29.906 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:14:29.906 00:14:29.906 --- 10.0.0.2 ping statistics --- 00:14:29.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.906 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:14:29.906 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:14:29.906 00:14:29.906 --- 10.0.0.1 ping statistics --- 00:14:29.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.906 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3490456 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3490456 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3490456 ']' 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:29.907 18:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:29.907 [2024-07-25 18:47:41.758356] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:29.907 [2024-07-25 18:47:41.758441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.166 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.166 [2024-07-25 18:47:41.827822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.166 [2024-07-25 18:47:41.920297] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.166 [2024-07-25 18:47:41.920364] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.166 [2024-07-25 18:47:41.920389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.166 [2024-07-25 18:47:41.920400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.166 [2024-07-25 18:47:41.920410] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.166 [2024-07-25 18:47:41.920501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.166 [2024-07-25 18:47:41.920559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.166 [2024-07-25 18:47:41.920588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.166 [2024-07-25 18:47:41.920590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.426 18:47:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:30.426 18:47:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:30.426 18:47:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.426 18:47:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.426 18:47:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.426 18:47:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.426 18:47:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:30.426 [2024-07-25 18:47:42.288411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.687 18:47:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:30.687 18:47:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:30.687 18:47:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:30.687 Malloc1 00:14:30.946 18:47:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:31.204 Malloc2 00:14:31.204 18:47:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:31.462 18:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:31.719 18:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.976 [2024-07-25 18:47:43.656435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.976 18:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:31.976 18:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3478183f-7698-4276-9db5-d44fe0aea247 -a 10.0.0.2 -s 4420 -i 4 00:14:31.976 18:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:31.976 18:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:31.976 18:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.976 18:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:31.976 18:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:34.513 [ 0]:0x1 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a51b7d8a87bf43629eec92efc3bae961 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a51b7d8a87bf43629eec92efc3bae961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.513 18:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:14:34.513 [ 0]:0x1 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a51b7d8a87bf43629eec92efc3bae961 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a51b7d8a87bf43629eec92efc3bae961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:34.513 [ 1]:0x2 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa7ceb77500a4dcca7ad2a6321632fd9 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa7ceb77500a4dcca7ad2a6321632fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.513 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.772 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:35.032 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:35.032 18:47:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3478183f-7698-4276-9db5-d44fe0aea247 -a 10.0.0.2 -s 4420 -i 4 00:14:35.290 18:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:35.290 18:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:35.290 18:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.290 18:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:35.290 18:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:35.290 18:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.824 [ 0]:0x2 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa7ceb77500a4dcca7ad2a6321632fd9 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa7ceb77500a4dcca7ad2a6321632fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.824 [ 0]:0x1 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a51b7d8a87bf43629eec92efc3bae961 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a51b7d8a87bf43629eec92efc3bae961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.824 [ 1]:0x2 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa7ceb77500a4dcca7ad2a6321632fd9 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa7ceb77500a4dcca7ad2a6321632fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.824 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:38.082 
18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:38.082 [ 0]:0x2 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa7ceb77500a4dcca7ad2a6321632fd9 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa7ceb77500a4dcca7ad2a6321632fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.082 18:47:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:38.342 18:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:38.342 18:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3478183f-7698-4276-9db5-d44fe0aea247 -a 10.0.0.2 -s 4420 -i 4 00:14:38.602 18:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:38.602 18:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:38.602 18:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.602 18:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:38.602 18:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:38.602 18:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:40.507 [ 0]:0x1 00:14:40.507 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:40.508 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:40.508 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a51b7d8a87bf43629eec92efc3bae961 00:14:40.508 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a51b7d8a87bf43629eec92efc3bae961 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.508 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:40.508 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:40.508 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:40.765 [ 1]:0x2 00:14:40.765 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:40.765 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:40.765 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa7ceb77500a4dcca7ad2a6321632fd9 00:14:40.765 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa7ceb77500a4dcca7ad2a6321632fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:40.765 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.023 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:41.024 [ 0]:0x2 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa7ceb77500a4dcca7ad2a6321632fd9 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa7ceb77500a4dcca7ad2a6321632fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:41.024 18:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:41.281 [2024-07-25 18:47:53.054663] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:41.281 request: 00:14:41.281 { 00:14:41.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.281 "nsid": 2, 00:14:41.281 "host": "nqn.2016-06.io.spdk:host1", 00:14:41.281 "method": 
"nvmf_ns_remove_host", 00:14:41.281 "req_id": 1 00:14:41.281 } 00:14:41.281 Got JSON-RPC error response 00:14:41.281 response: 00:14:41.281 { 00:14:41.281 "code": -32602, 00:14:41.281 "message": "Invalid parameters" 00:14:41.281 } 00:14:41.281 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:41.281 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.281 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:41.281 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.281 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:41.281 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:41.282 [ 0]:0x2 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:41.282 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.539 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=fa7ceb77500a4dcca7ad2a6321632fd9 00:14:41.539 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ fa7ceb77500a4dcca7ad2a6321632fd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.539 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:41.539 18:47:53 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.539 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.797 rmmod nvme_tcp 00:14:41.797 rmmod nvme_fabrics 00:14:41.797 rmmod nvme_keyring 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3490456 ']' 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3490456 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3490456 ']' 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3490456 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3490456 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3490456' 00:14:41.797 killing process with pid 3490456 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3490456 00:14:41.797 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3490456 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.055 18:47:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.962 
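Stripped of the xtrace plumbing, the ns_masking steps above come down to a handful of SPDK RPC and nvme-cli calls. A minimal sketch of that flow, assuming the same target (10.0.0.2:4420), subsystem nqn.2016-06.io.spdk:cnode1 and host nqn.2016-06.io.spdk:host1 as the trace, with rpc.py standing in for the full scripts/rpc.py path used above:

# hide namespace 1 from host1, then expose it again (same RPCs the trace issues)
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# initiator-side visibility check: in this trace a masked namespace no longer
# shows up in list-ns and id-ns reports an all-zero NGUID, which is exactly
# what the test's ns_is_visible() greps for
nvme list-ns /dev/nvme0 | grep 0x1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid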
18:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.962 00:14:43.962 real 0m16.244s 00:14:43.962 user 0m50.752s 00:14:43.962 sys 0m3.667s 00:14:43.962 18:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.962 18:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.962 ************************************ 00:14:43.962 END TEST nvmf_ns_masking 00:14:43.962 ************************************ 00:14:44.220 18:47:55 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:44.220 18:47:55 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:44.220 18:47:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:44.220 18:47:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.220 18:47:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:44.220 ************************************ 00:14:44.220 START TEST nvmf_nvme_cli 00:14:44.220 ************************************ 00:14:44.220 18:47:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:44.220 * Looking for test storage... 00:14:44.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.220 18:47:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:44.221 18:47:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:46.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:46.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:46.122 18:47:57 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:46.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:46.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.122 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:46.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:14:46.123 00:14:46.123 --- 10.0.0.2 ping statistics --- 00:14:46.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.123 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:14:46.123 00:14:46.123 --- 10.0.0.1 ping statistics --- 00:14:46.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.123 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.123 18:47:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3493881 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3493881 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3493881 ']' 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
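The nvmf_tcp_init sequence just traced boils down to isolating one port of the NIC pair in a network namespace, addressing both sides, and starting nvmf_tgt inside that namespace; condensed from the ip/iptables calls above (interface names cvl_0_0/cvl_0_1 as detected on this host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace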
00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:46.381 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.381 [2024-07-25 18:47:58.062039] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:46.381 [2024-07-25 18:47:58.062166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.381 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.381 [2024-07-25 18:47:58.134100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.381 [2024-07-25 18:47:58.224837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.381 [2024-07-25 18:47:58.224908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.381 [2024-07-25 18:47:58.224922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.381 [2024-07-25 18:47:58.224934] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.381 [2024-07-25 18:47:58.224957] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.381 [2024-07-25 18:47:58.225009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.381 [2024-07-25 18:47:58.225037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.381 [2024-07-25 18:47:58.225093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.381 [2024-07-25 18:47:58.225097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 [2024-07-25 18:47:58.380847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 Malloc0 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 Malloc1 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 [2024-07-25 18:47:58.466744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.639 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:46.895 00:14:46.895 Discovery Log Number of Records 2, Generation counter 2 00:14:46.895 =====Discovery Log Entry 0====== 00:14:46.895 trtype: tcp 00:14:46.895 adrfam: ipv4 00:14:46.895 subtype: current discovery subsystem 00:14:46.895 treq: not required 00:14:46.895 portid: 0 00:14:46.895 trsvcid: 4420 00:14:46.895 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:46.895 traddr: 10.0.0.2 00:14:46.895 eflags: explicit discovery connections, duplicate discovery information 00:14:46.895 sectype: none 00:14:46.895 =====Discovery Log Entry 1====== 00:14:46.895 trtype: tcp 00:14:46.895 adrfam: ipv4 00:14:46.895 subtype: nvme subsystem 00:14:46.895 treq: not required 00:14:46.895 portid: 0 00:14:46.895 trsvcid: 
4420 00:14:46.895 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:46.895 traddr: 10.0.0.2 00:14:46.895 eflags: none 00:14:46.895 sectype: none 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:46.895 18:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.461 18:47:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:47.461 18:47:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:47.461 18:47:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.461 18:47:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:47.461 18:47:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:47.461 18:47:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:49.984 18:48:01 
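The nvme_cli flow being exercised here is plain nvme-cli against the SPDK TCP target; a minimal sketch of the discover/connect/verify/disconnect cycle, reusing the host NQN/ID generated above and the SPDKISFASTANDAWESOME serial (HOSTNQN/HOSTID are just convenience variables for the values in the trace):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

# discovery log should list the discovery subsystem plus cnode1
nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420

# connect, confirm both malloc namespaces appeared, then tear down
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # the trace expects 2 (Malloc0 and Malloc1)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1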
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:49.984 /dev/nvme0n1 ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:49.984 18:48:01 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.984 rmmod nvme_tcp 00:14:49.984 rmmod nvme_fabrics 00:14:49.984 rmmod nvme_keyring 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3493881 ']' 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3493881 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3493881 ']' 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3493881 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3493881 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3493881' 00:14:49.984 killing process with pid 3493881 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3493881 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3493881 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.984 18:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.517 18:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.517 00:14:52.517 real 0m7.962s 00:14:52.517 user 0m14.643s 00:14:52.517 sys 0m2.117s 00:14:52.517 18:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:52.517 18:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.517 ************************************ 00:14:52.517 END TEST nvmf_nvme_cli 00:14:52.517 ************************************ 00:14:52.517 18:48:03 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:52.517 18:48:03 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:52.517 18:48:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:52.517 18:48:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:52.517 18:48:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.517 ************************************ 00:14:52.517 START TEST nvmf_vfio_user 00:14:52.517 ************************************ 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:52.517 * Looking for test storage... 00:14:52.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.517 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:52.518 
18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3494802 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3494802' 00:14:52.518 Process pid: 3494802 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3494802 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3494802 ']' 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.518 18:48:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:52.518 [2024-07-25 18:48:04.012621] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:52.518 [2024-07-25 18:48:04.012702] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.518 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.518 [2024-07-25 18:48:04.072477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.518 [2024-07-25 18:48:04.163174] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.518 [2024-07-25 18:48:04.163219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.518 [2024-07-25 18:48:04.163234] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.518 [2024-07-25 18:48:04.163246] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.518 [2024-07-25 18:48:04.163256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
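The target bring-up that this xtrace walks through reduces to a short, repeatable sequence: launch nvmf_tgt, wait for its RPC socket, then use rpc.py to create the VFIOUSER transport, a malloc bdev, a subsystem, a namespace, and a listener rooted under /var/run/vfio-user. Below is a condensed sketch of that sequence reusing the exact RPC arguments visible in this trace; the checkout path is the workspace path from this job, and the sleep stands in for the harness's waitforlisten helper, so treat it as an illustration rather than the test script itself.

  #!/usr/bin/env bash
  # Sketch of the VFIO-user target setup exercised by nvmf_vfio_user.sh (illustrative, not the harness itself).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace checkout used in this job
  rpc_py="$SPDK_DIR/scripts/rpc.py"

  rm -rf /var/run/vfio-user
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1

  # Start the target on cores 0-3 with all tracepoint groups enabled, as in the trace above.
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
  sleep 1   # stand-in for the harness's waitforlisten on /var/tmp/spdk.sock

  # Provision transport, backing bdev, subsystem, namespace and vfio-user listener.
  "$rpc_py" nvmf_create_transport -t VFIOUSER
  "$rpc_py" bdev_malloc_create 64 512 -b Malloc1
  "$rpc_py" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  "$rpc_py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  "$rpc_py" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second device (Malloc2 behind nqn.2019-07.io.spdk:cnode2 under /var/run/vfio-user/domain/vfio-user2/2) is created the same way, as the trace that follows shows.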
00:14:52.518 [2024-07-25 18:48:04.163314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.518 [2024-07-25 18:48:04.163377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.518 [2024-07-25 18:48:04.163358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.518 [2024-07-25 18:48:04.163379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.518 18:48:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:52.518 18:48:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:52.518 18:48:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:53.450 18:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:53.707 18:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:53.707 18:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:53.707 18:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.707 18:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:53.707 18:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:53.965 Malloc1 00:14:54.222 18:48:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:54.222 18:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:54.479 18:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:54.736 18:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.736 18:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:54.736 18:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.994 Malloc2 00:14:54.994 18:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:55.251 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:55.508 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:55.766 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:55.766 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:56.025 18:48:07 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.025 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:56.025 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:56.025 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:56.025 [2024-07-25 18:48:07.663618] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:56.025 [2024-07-25 18:48:07.663660] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495487 ] 00:14:56.025 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.025 [2024-07-25 18:48:07.698247] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:56.025 [2024-07-25 18:48:07.700763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.025 [2024-07-25 18:48:07.700791] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faa8b9bf000 00:14:56.025 [2024-07-25 18:48:07.701758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.702753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.703761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.704764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.705773] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.706776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.707779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.708784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:56.025 [2024-07-25 18:48:07.709790] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:56.025 [2024-07-25 18:48:07.709812] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faa8a775000 00:14:56.025 [2024-07-25 18:48:07.710932] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:56.025 [2024-07-25 18:48:07.724683] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:56.025 [2024-07-25 18:48:07.724722] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:56.025 [2024-07-25 18:48:07.729892] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:56.025 [2024-07-25 18:48:07.729943] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:56.025 [2024-07-25 18:48:07.730030] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:56.025 [2024-07-25 18:48:07.730083] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:56.025 [2024-07-25 18:48:07.730096] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:56.025 [2024-07-25 18:48:07.730891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:56.025 [2024-07-25 18:48:07.730913] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:56.025 [2024-07-25 18:48:07.730926] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:56.025 [2024-07-25 18:48:07.731896] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:56.025 [2024-07-25 18:48:07.731914] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:56.025 [2024-07-25 18:48:07.731928] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:56.025 [2024-07-25 18:48:07.732899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:56.025 [2024-07-25 18:48:07.732919] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:56.025 [2024-07-25 18:48:07.733902] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:56.025 [2024-07-25 18:48:07.733920] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:56.025 [2024-07-25 18:48:07.733929] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:56.025 [2024-07-25 18:48:07.733940] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:56.025 [2024-07-25 18:48:07.734049] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:56.026 [2024-07-25 18:48:07.734057] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:56.026 [2024-07-25 18:48:07.734091] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:56.026 [2024-07-25 18:48:07.736076] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:56.026 [2024-07-25 18:48:07.736917] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:56.026 [2024-07-25 18:48:07.737919] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:56.026 [2024-07-25 18:48:07.738912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.026 [2024-07-25 18:48:07.739021] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:56.026 [2024-07-25 18:48:07.739928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:56.026 [2024-07-25 18:48:07.739945] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:56.026 [2024-07-25 18:48:07.739954] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.739977] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:56.026 [2024-07-25 18:48:07.740000] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740028] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.026 [2024-07-25 18:48:07.740037] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.026 [2024-07-25 18:48:07.740084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740201] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:56.026 [2024-07-25 18:48:07.740211] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:56.026 [2024-07-25 18:48:07.740219] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:56.026 [2024-07-25 18:48:07.740226] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:56.026 [2024-07-25 18:48:07.740234] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:56.026 [2024-07-25 18:48:07.740242] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:56.026 [2024-07-25 18:48:07.740250] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740262] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.026 [2024-07-25 18:48:07.740332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.026 [2024-07-25 18:48:07.740360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.026 [2024-07-25 18:48:07.740381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.026 [2024-07-25 18:48:07.740390] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740405] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740458] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:56.026 [2024-07-25 18:48:07.740466] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740476] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740488] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740585] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740600] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740613] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:56.026 [2024-07-25 18:48:07.740621] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:56.026 [2024-07-25 18:48:07.740631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740659] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:56.026 [2024-07-25 18:48:07.740678] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740692] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740704] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.026 [2024-07-25 18:48:07.740712] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.026 [2024-07-25 18:48:07.740722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740762] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740776] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740787] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:56.026 [2024-07-25 18:48:07.740795] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.026 [2024-07-25 18:48:07.740805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740830] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740841] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
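The state-machine DEBUG messages above come from spdk_nvme_identify attaching to the target as an NVMe-oF host: host-side SPDK tools reach a vfio-user controller purely through a transport-ID string (trtype/traddr/subnqn) rather than a PCI address. A minimal sketch of the two invocations this test uses, the identify run that produced this very trace and the perf run that appears later in the log; comments cover only the options whose meaning is unambiguous here, and SPDK_DIR follows the convention of the earlier sketch.

  # Transport ID naming the vfio-user controller created above (copied from this trace).
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

  # Dump controller and namespace data; the -L flags enable the nvme, nvme_vfio and
  # vfio_pci debug logs that produce the per-register DEBUG lines seen in this section.
  "$SPDK_DIR/build/bin/spdk_nvme_identify" -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci

  # 4 KiB reads (-o 4096 -w read) for 5 seconds (-t 5) at queue depth 128 (-q 128),
  # pinned to core 1 (-c 0x2); the remaining flags mirror the log verbatim.
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2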
00:14:56.026 [2024-07-25 18:48:07.740854] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740864] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740872] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740881] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:56.026 [2024-07-25 18:48:07.740889] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:56.026 [2024-07-25 18:48:07.740897] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:56.026 [2024-07-25 18:48:07.740926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.740974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.740989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.741001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.741017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:56.026 [2024-07-25 18:48:07.741028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:56.026 [2024-07-25 18:48:07.741072] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:56.026 [2024-07-25 18:48:07.741085] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:56.026 [2024-07-25 18:48:07.741091] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:56.027 [2024-07-25 18:48:07.741122] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:56.027 [2024-07-25 18:48:07.741134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:56.027 [2024-07-25 18:48:07.741147] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:56.027 [2024-07-25 18:48:07.741156] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:56.027 [2024-07-25 18:48:07.741166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:56.027 [2024-07-25 18:48:07.741178] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:56.027 [2024-07-25 18:48:07.741187] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:56.027 [2024-07-25 18:48:07.741196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:56.027 [2024-07-25 18:48:07.741209] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:56.027 [2024-07-25 18:48:07.741217] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:56.027 [2024-07-25 18:48:07.741226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:56.027 [2024-07-25 18:48:07.741239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:56.027 [2024-07-25 18:48:07.741260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:56.027 [2024-07-25 18:48:07.741276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:56.027 [2024-07-25 18:48:07.741293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:56.027 ===================================================== 00:14:56.027 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.027 ===================================================== 00:14:56.027 Controller Capabilities/Features 00:14:56.027 ================================ 00:14:56.027 Vendor ID: 4e58 00:14:56.027 Subsystem Vendor ID: 4e58 00:14:56.027 Serial Number: SPDK1 00:14:56.027 Model Number: SPDK bdev Controller 00:14:56.027 Firmware Version: 24.05.1 00:14:56.027 Recommended Arb Burst: 6 00:14:56.027 IEEE OUI Identifier: 8d 6b 50 00:14:56.027 Multi-path I/O 00:14:56.027 May have multiple subsystem ports: Yes 00:14:56.027 May have multiple controllers: Yes 00:14:56.027 Associated with SR-IOV VF: No 00:14:56.027 Max Data Transfer Size: 131072 00:14:56.027 Max Number of Namespaces: 32 00:14:56.027 Max Number of I/O Queues: 127 00:14:56.027 NVMe Specification Version (VS): 1.3 00:14:56.027 NVMe Specification Version (Identify): 1.3 00:14:56.027 Maximum Queue Entries: 256 00:14:56.027 Contiguous Queues Required: Yes 00:14:56.027 Arbitration Mechanisms Supported 00:14:56.027 Weighted Round Robin: Not Supported 00:14:56.027 Vendor Specific: Not Supported 00:14:56.027 Reset Timeout: 15000 ms 00:14:56.027 Doorbell Stride: 4 bytes 00:14:56.027 NVM Subsystem Reset: Not Supported 00:14:56.027 Command Sets Supported 00:14:56.027 NVM Command Set: Supported 00:14:56.027 Boot Partition: Not Supported 00:14:56.027 Memory Page Size Minimum: 4096 bytes 00:14:56.027 Memory Page Size Maximum: 4096 bytes 00:14:56.027 Persistent Memory Region: Not Supported 00:14:56.027 Optional Asynchronous Events Supported 00:14:56.027 Namespace Attribute Notices: Supported 00:14:56.027 Firmware Activation Notices: Not Supported 00:14:56.027 ANA Change Notices: Not Supported 
00:14:56.027 PLE Aggregate Log Change Notices: Not Supported 00:14:56.027 LBA Status Info Alert Notices: Not Supported 00:14:56.027 EGE Aggregate Log Change Notices: Not Supported 00:14:56.027 Normal NVM Subsystem Shutdown event: Not Supported 00:14:56.027 Zone Descriptor Change Notices: Not Supported 00:14:56.027 Discovery Log Change Notices: Not Supported 00:14:56.027 Controller Attributes 00:14:56.027 128-bit Host Identifier: Supported 00:14:56.027 Non-Operational Permissive Mode: Not Supported 00:14:56.027 NVM Sets: Not Supported 00:14:56.027 Read Recovery Levels: Not Supported 00:14:56.027 Endurance Groups: Not Supported 00:14:56.027 Predictable Latency Mode: Not Supported 00:14:56.027 Traffic Based Keep ALive: Not Supported 00:14:56.027 Namespace Granularity: Not Supported 00:14:56.027 SQ Associations: Not Supported 00:14:56.027 UUID List: Not Supported 00:14:56.027 Multi-Domain Subsystem: Not Supported 00:14:56.027 Fixed Capacity Management: Not Supported 00:14:56.027 Variable Capacity Management: Not Supported 00:14:56.027 Delete Endurance Group: Not Supported 00:14:56.027 Delete NVM Set: Not Supported 00:14:56.027 Extended LBA Formats Supported: Not Supported 00:14:56.027 Flexible Data Placement Supported: Not Supported 00:14:56.027 00:14:56.027 Controller Memory Buffer Support 00:14:56.027 ================================ 00:14:56.027 Supported: No 00:14:56.027 00:14:56.027 Persistent Memory Region Support 00:14:56.027 ================================ 00:14:56.027 Supported: No 00:14:56.027 00:14:56.027 Admin Command Set Attributes 00:14:56.027 ============================ 00:14:56.027 Security Send/Receive: Not Supported 00:14:56.027 Format NVM: Not Supported 00:14:56.027 Firmware Activate/Download: Not Supported 00:14:56.027 Namespace Management: Not Supported 00:14:56.027 Device Self-Test: Not Supported 00:14:56.027 Directives: Not Supported 00:14:56.027 NVMe-MI: Not Supported 00:14:56.027 Virtualization Management: Not Supported 00:14:56.027 Doorbell Buffer Config: Not Supported 00:14:56.027 Get LBA Status Capability: Not Supported 00:14:56.027 Command & Feature Lockdown Capability: Not Supported 00:14:56.027 Abort Command Limit: 4 00:14:56.027 Async Event Request Limit: 4 00:14:56.027 Number of Firmware Slots: N/A 00:14:56.027 Firmware Slot 1 Read-Only: N/A 00:14:56.027 Firmware Activation Without Reset: N/A 00:14:56.027 Multiple Update Detection Support: N/A 00:14:56.027 Firmware Update Granularity: No Information Provided 00:14:56.027 Per-Namespace SMART Log: No 00:14:56.027 Asymmetric Namespace Access Log Page: Not Supported 00:14:56.027 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:56.027 Command Effects Log Page: Supported 00:14:56.027 Get Log Page Extended Data: Supported 00:14:56.027 Telemetry Log Pages: Not Supported 00:14:56.027 Persistent Event Log Pages: Not Supported 00:14:56.027 Supported Log Pages Log Page: May Support 00:14:56.027 Commands Supported & Effects Log Page: Not Supported 00:14:56.027 Feature Identifiers & Effects Log Page:May Support 00:14:56.027 NVMe-MI Commands & Effects Log Page: May Support 00:14:56.027 Data Area 4 for Telemetry Log: Not Supported 00:14:56.027 Error Log Page Entries Supported: 128 00:14:56.027 Keep Alive: Supported 00:14:56.027 Keep Alive Granularity: 10000 ms 00:14:56.027 00:14:56.027 NVM Command Set Attributes 00:14:56.027 ========================== 00:14:56.027 Submission Queue Entry Size 00:14:56.027 Max: 64 00:14:56.027 Min: 64 00:14:56.027 Completion Queue Entry Size 00:14:56.027 Max: 16 00:14:56.027 Min: 16 
00:14:56.027 Number of Namespaces: 32 00:14:56.027 Compare Command: Supported 00:14:56.027 Write Uncorrectable Command: Not Supported 00:14:56.027 Dataset Management Command: Supported 00:14:56.027 Write Zeroes Command: Supported 00:14:56.027 Set Features Save Field: Not Supported 00:14:56.027 Reservations: Not Supported 00:14:56.027 Timestamp: Not Supported 00:14:56.027 Copy: Supported 00:14:56.027 Volatile Write Cache: Present 00:14:56.027 Atomic Write Unit (Normal): 1 00:14:56.027 Atomic Write Unit (PFail): 1 00:14:56.027 Atomic Compare & Write Unit: 1 00:14:56.027 Fused Compare & Write: Supported 00:14:56.027 Scatter-Gather List 00:14:56.027 SGL Command Set: Supported (Dword aligned) 00:14:56.027 SGL Keyed: Not Supported 00:14:56.027 SGL Bit Bucket Descriptor: Not Supported 00:14:56.027 SGL Metadata Pointer: Not Supported 00:14:56.027 Oversized SGL: Not Supported 00:14:56.027 SGL Metadata Address: Not Supported 00:14:56.027 SGL Offset: Not Supported 00:14:56.027 Transport SGL Data Block: Not Supported 00:14:56.027 Replay Protected Memory Block: Not Supported 00:14:56.027 00:14:56.027 Firmware Slot Information 00:14:56.027 ========================= 00:14:56.027 Active slot: 1 00:14:56.027 Slot 1 Firmware Revision: 24.05.1 00:14:56.027 00:14:56.027 00:14:56.027 Commands Supported and Effects 00:14:56.027 ============================== 00:14:56.028 Admin Commands 00:14:56.028 -------------- 00:14:56.028 Get Log Page (02h): Supported 00:14:56.028 Identify (06h): Supported 00:14:56.028 Abort (08h): Supported 00:14:56.028 Set Features (09h): Supported 00:14:56.028 Get Features (0Ah): Supported 00:14:56.028 Asynchronous Event Request (0Ch): Supported 00:14:56.028 Keep Alive (18h): Supported 00:14:56.028 I/O Commands 00:14:56.028 ------------ 00:14:56.028 Flush (00h): Supported LBA-Change 00:14:56.028 Write (01h): Supported LBA-Change 00:14:56.028 Read (02h): Supported 00:14:56.028 Compare (05h): Supported 00:14:56.028 Write Zeroes (08h): Supported LBA-Change 00:14:56.028 Dataset Management (09h): Supported LBA-Change 00:14:56.028 Copy (19h): Supported LBA-Change 00:14:56.028 Unknown (79h): Supported LBA-Change 00:14:56.028 Unknown (7Ah): Supported 00:14:56.028 00:14:56.028 Error Log 00:14:56.028 ========= 00:14:56.028 00:14:56.028 Arbitration 00:14:56.028 =========== 00:14:56.028 Arbitration Burst: 1 00:14:56.028 00:14:56.028 Power Management 00:14:56.028 ================ 00:14:56.028 Number of Power States: 1 00:14:56.028 Current Power State: Power State #0 00:14:56.028 Power State #0: 00:14:56.028 Max Power: 0.00 W 00:14:56.028 Non-Operational State: Operational 00:14:56.028 Entry Latency: Not Reported 00:14:56.028 Exit Latency: Not Reported 00:14:56.028 Relative Read Throughput: 0 00:14:56.028 Relative Read Latency: 0 00:14:56.028 Relative Write Throughput: 0 00:14:56.028 Relative Write Latency: 0 00:14:56.028 Idle Power: Not Reported 00:14:56.028 Active Power: Not Reported 00:14:56.028 Non-Operational Permissive Mode: Not Supported 00:14:56.028 00:14:56.028 Health Information 00:14:56.028 ================== 00:14:56.028 Critical Warnings: 00:14:56.028 Available Spare Space: OK 00:14:56.028 Temperature: OK 00:14:56.028 Device Reliability: OK 00:14:56.028 Read Only: No 00:14:56.028 Volatile Memory Backup: OK 00:14:56.028 Current Temperature: 0 Kelvin[2024-07-25 18:48:07.741461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:56.028 [2024-07-25 18:48:07.741478] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:56.028 [2024-07-25 18:48:07.741516] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:56.028 [2024-07-25 18:48:07.741533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.028 [2024-07-25 18:48:07.741544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.028 [2024-07-25 18:48:07.741555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.028 [2024-07-25 18:48:07.741565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.028 [2024-07-25 18:48:07.745076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:56.028 [2024-07-25 18:48:07.745102] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:56.028 [2024-07-25 18:48:07.745960] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.028 [2024-07-25 18:48:07.746047] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:56.028 [2024-07-25 18:48:07.746085] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:56.028 [2024-07-25 18:48:07.746967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:56.028 [2024-07-25 18:48:07.746992] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:56.028 [2024-07-25 18:48:07.747067] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:56.028 [2024-07-25 18:48:07.749006] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:56.028 (-273 Celsius) 00:14:56.028 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:56.028 Available Spare: 0% 00:14:56.028 Available Spare Threshold: 0% 00:14:56.028 Life Percentage Used: 0% 00:14:56.028 Data Units Read: 0 00:14:56.028 Data Units Written: 0 00:14:56.028 Host Read Commands: 0 00:14:56.028 Host Write Commands: 0 00:14:56.028 Controller Busy Time: 0 minutes 00:14:56.028 Power Cycles: 0 00:14:56.028 Power On Hours: 0 hours 00:14:56.028 Unsafe Shutdowns: 0 00:14:56.028 Unrecoverable Media Errors: 0 00:14:56.028 Lifetime Error Log Entries: 0 00:14:56.028 Warning Temperature Time: 0 minutes 00:14:56.028 Critical Temperature Time: 0 minutes 00:14:56.028 00:14:56.028 Number of Queues 00:14:56.028 ================ 00:14:56.028 Number of I/O Submission Queues: 127 00:14:56.028 Number of I/O Completion Queues: 127 00:14:56.028 00:14:56.028 Active Namespaces 00:14:56.028 ================= 00:14:56.028 Namespace ID:1 00:14:56.028 Error Recovery Timeout: Unlimited 00:14:56.028 Command Set Identifier: NVM (00h) 00:14:56.028 Deallocate: Supported 00:14:56.028 Deallocated/Unwritten Error: Not Supported 
00:14:56.028 Deallocated Read Value: Unknown 00:14:56.028 Deallocate in Write Zeroes: Not Supported 00:14:56.028 Deallocated Guard Field: 0xFFFF 00:14:56.028 Flush: Supported 00:14:56.028 Reservation: Supported 00:14:56.028 Namespace Sharing Capabilities: Multiple Controllers 00:14:56.028 Size (in LBAs): 131072 (0GiB) 00:14:56.028 Capacity (in LBAs): 131072 (0GiB) 00:14:56.028 Utilization (in LBAs): 131072 (0GiB) 00:14:56.028 NGUID: 44C1FA3ADC754E4E9C6782BBD68B474D 00:14:56.028 UUID: 44c1fa3a-dc75-4e4e-9c67-82bbd68b474d 00:14:56.028 Thin Provisioning: Not Supported 00:14:56.028 Per-NS Atomic Units: Yes 00:14:56.028 Atomic Boundary Size (Normal): 0 00:14:56.028 Atomic Boundary Size (PFail): 0 00:14:56.028 Atomic Boundary Offset: 0 00:14:56.028 Maximum Single Source Range Length: 65535 00:14:56.028 Maximum Copy Length: 65535 00:14:56.028 Maximum Source Range Count: 1 00:14:56.028 NGUID/EUI64 Never Reused: No 00:14:56.028 Namespace Write Protected: No 00:14:56.028 Number of LBA Formats: 1 00:14:56.028 Current LBA Format: LBA Format #00 00:14:56.028 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:56.028 00:14:56.028 18:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:56.028 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.286 [2024-07-25 18:48:07.978877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.545 Initializing NVMe Controllers 00:15:01.545 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:01.545 Initialization complete. Launching workers. 00:15:01.545 ======================================================== 00:15:01.545 Latency(us) 00:15:01.545 Device Information : IOPS MiB/s Average min max 00:15:01.545 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35840.75 140.00 3570.57 1172.36 7582.64 00:15:01.545 ======================================================== 00:15:01.545 Total : 35840.75 140.00 3570.57 1172.36 7582.64 00:15:01.545 00:15:01.545 [2024-07-25 18:48:12.998141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.545 18:48:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:01.545 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.545 [2024-07-25 18:48:13.233331] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.805 Initializing NVMe Controllers 00:15:06.805 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.805 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:06.805 Initialization complete. Launching workers. 
00:15:06.805 ======================================================== 00:15:06.805 Latency(us) 00:15:06.805 Device Information : IOPS MiB/s Average min max 00:15:06.805 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15967.91 62.37 8021.34 4970.75 15974.22 00:15:06.805 ======================================================== 00:15:06.805 Total : 15967.91 62.37 8021.34 4970.75 15974.22 00:15:06.805 00:15:06.805 [2024-07-25 18:48:18.278832] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.805 18:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:06.805 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.806 [2024-07-25 18:48:18.488881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.068 [2024-07-25 18:48:23.562416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.068 Initializing NVMe Controllers 00:15:12.068 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.068 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.068 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:12.068 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:12.068 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:12.068 Initialization complete. Launching workers. 00:15:12.068 Starting thread on core 2 00:15:12.068 Starting thread on core 3 00:15:12.068 Starting thread on core 1 00:15:12.068 18:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:12.068 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.068 [2024-07-25 18:48:23.864517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.374 [2024-07-25 18:48:27.037667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.374 Initializing NVMe Controllers 00:15:15.374 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.374 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:15.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:15.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:15.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:15.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:15.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:15.374 Initialization complete. Launching workers. 
00:15:15.374 Starting thread on core 1 with urgent priority queue 00:15:15.374 Starting thread on core 2 with urgent priority queue 00:15:15.374 Starting thread on core 3 with urgent priority queue 00:15:15.374 Starting thread on core 0 with urgent priority queue 00:15:15.374 SPDK bdev Controller (SPDK1 ) core 0: 5412.33 IO/s 18.48 secs/100000 ios 00:15:15.374 SPDK bdev Controller (SPDK1 ) core 1: 5161.00 IO/s 19.38 secs/100000 ios 00:15:15.374 SPDK bdev Controller (SPDK1 ) core 2: 5781.00 IO/s 17.30 secs/100000 ios 00:15:15.374 SPDK bdev Controller (SPDK1 ) core 3: 5911.33 IO/s 16.92 secs/100000 ios 00:15:15.374 ======================================================== 00:15:15.374 00:15:15.374 18:48:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:15.374 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.631 [2024-07-25 18:48:27.340612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.631 Initializing NVMe Controllers 00:15:15.631 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.631 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.631 Namespace ID: 1 size: 0GB 00:15:15.631 Initialization complete. 00:15:15.631 INFO: using host memory buffer for IO 00:15:15.631 Hello world! 00:15:15.631 [2024-07-25 18:48:27.375808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.632 18:48:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:15.632 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.889 [2024-07-25 18:48:27.658079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.821 Initializing NVMe Controllers 00:15:16.821 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.821 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.821 Initialization complete. Launching workers. 
00:15:16.821 submit (in ns) avg, min, max = 8580.0, 3501.1, 7010708.9 00:15:16.821 complete (in ns) avg, min, max = 26284.9, 2068.9, 4998060.0 00:15:16.821 00:15:16.821 Submit histogram 00:15:16.821 ================ 00:15:16.821 Range in us Cumulative Count 00:15:16.821 3.484 - 3.508: 0.0449% ( 6) 00:15:16.821 3.508 - 3.532: 0.4043% ( 48) 00:15:16.821 3.532 - 3.556: 1.4374% ( 138) 00:15:16.821 3.556 - 3.579: 4.2749% ( 379) 00:15:16.821 3.579 - 3.603: 9.8750% ( 748) 00:15:16.821 3.603 - 3.627: 18.3874% ( 1137) 00:15:16.821 3.627 - 3.650: 28.2848% ( 1322) 00:15:16.821 3.650 - 3.674: 38.4817% ( 1362) 00:15:16.821 3.674 - 3.698: 46.1855% ( 1029) 00:15:16.821 3.698 - 3.721: 52.3396% ( 822) 00:15:16.821 3.721 - 3.745: 56.2776% ( 526) 00:15:16.821 3.745 - 3.769: 60.0135% ( 499) 00:15:16.821 3.769 - 3.793: 63.5397% ( 471) 00:15:16.821 3.793 - 3.816: 66.6766% ( 419) 00:15:16.821 3.816 - 3.840: 69.8435% ( 423) 00:15:16.821 3.840 - 3.864: 74.0361% ( 560) 00:15:16.821 3.864 - 3.887: 78.3784% ( 580) 00:15:16.821 3.887 - 3.911: 82.0319% ( 488) 00:15:16.821 3.911 - 3.935: 84.8169% ( 372) 00:15:16.821 3.935 - 3.959: 86.6662% ( 247) 00:15:16.821 3.959 - 3.982: 88.2833% ( 216) 00:15:16.821 3.982 - 4.006: 89.7657% ( 198) 00:15:16.821 4.006 - 4.030: 90.8737% ( 148) 00:15:16.821 4.030 - 4.053: 91.8320% ( 128) 00:15:16.821 4.053 - 4.077: 92.5582% ( 97) 00:15:16.821 4.077 - 4.101: 93.4416% ( 118) 00:15:16.821 4.101 - 4.124: 94.1978% ( 101) 00:15:16.821 4.124 - 4.148: 94.8941% ( 93) 00:15:16.822 4.148 - 4.172: 95.3807% ( 65) 00:15:16.822 4.172 - 4.196: 95.8149% ( 58) 00:15:16.822 4.196 - 4.219: 96.1443% ( 44) 00:15:16.822 4.219 - 4.243: 96.3615% ( 29) 00:15:16.822 4.243 - 4.267: 96.5411% ( 24) 00:15:16.822 4.267 - 4.290: 96.7133% ( 23) 00:15:16.822 4.290 - 4.314: 96.9155% ( 27) 00:15:16.822 4.314 - 4.338: 96.9679% ( 7) 00:15:16.822 4.338 - 4.361: 97.0652% ( 13) 00:15:16.822 4.361 - 4.385: 97.1700% ( 14) 00:15:16.822 4.385 - 4.409: 97.2524% ( 11) 00:15:16.822 4.409 - 4.433: 97.3123% ( 8) 00:15:16.822 4.433 - 4.456: 97.3422% ( 4) 00:15:16.822 4.456 - 4.480: 97.3497% ( 1) 00:15:16.822 4.480 - 4.504: 97.4021% ( 7) 00:15:16.822 4.504 - 4.527: 97.4395% ( 5) 00:15:16.822 4.527 - 4.551: 97.4620% ( 3) 00:15:16.822 4.551 - 4.575: 97.4920% ( 4) 00:15:16.822 4.575 - 4.599: 97.4994% ( 1) 00:15:16.822 4.599 - 4.622: 97.5069% ( 1) 00:15:16.822 4.622 - 4.646: 97.5219% ( 2) 00:15:16.822 4.646 - 4.670: 97.5294% ( 1) 00:15:16.822 4.670 - 4.693: 97.5444% ( 2) 00:15:16.822 4.693 - 4.717: 97.5518% ( 1) 00:15:16.822 4.717 - 4.741: 97.5968% ( 6) 00:15:16.822 4.741 - 4.764: 97.6342% ( 5) 00:15:16.822 4.764 - 4.788: 97.7091% ( 10) 00:15:16.822 4.788 - 4.812: 97.7240% ( 2) 00:15:16.822 4.812 - 4.836: 97.7764% ( 7) 00:15:16.822 4.836 - 4.859: 97.8139% ( 5) 00:15:16.822 4.859 - 4.883: 97.8738% ( 8) 00:15:16.822 4.883 - 4.907: 97.9486% ( 10) 00:15:16.822 4.907 - 4.930: 97.9786% ( 4) 00:15:16.822 4.930 - 4.954: 98.0235% ( 6) 00:15:16.822 4.954 - 4.978: 98.0385% ( 2) 00:15:16.822 4.978 - 5.001: 98.0535% ( 2) 00:15:16.822 5.001 - 5.025: 98.0834% ( 4) 00:15:16.822 5.025 - 5.049: 98.1133% ( 4) 00:15:16.822 5.049 - 5.073: 98.1283% ( 2) 00:15:16.822 5.073 - 5.096: 98.1732% ( 6) 00:15:16.822 5.096 - 5.120: 98.2032% ( 4) 00:15:16.822 5.120 - 5.144: 98.2107% ( 1) 00:15:16.822 5.144 - 5.167: 98.2331% ( 3) 00:15:16.822 5.167 - 5.191: 98.2406% ( 1) 00:15:16.822 5.191 - 5.215: 98.2706% ( 4) 00:15:16.822 5.239 - 5.262: 98.2930% ( 3) 00:15:16.822 5.262 - 5.286: 98.3080% ( 2) 00:15:16.822 5.286 - 5.310: 98.3155% ( 1) 00:15:16.822 5.310 - 5.333: 98.3230% ( 1) 
00:15:16.822 5.333 - 5.357: 98.3305% ( 1) 00:15:16.822 5.570 - 5.594: 98.3454% ( 2) 00:15:16.822 5.594 - 5.618: 98.3529% ( 1) 00:15:16.822 5.641 - 5.665: 98.3604% ( 1) 00:15:16.822 5.689 - 5.713: 98.3679% ( 1) 00:15:16.822 5.713 - 5.736: 98.3754% ( 1) 00:15:16.822 5.760 - 5.784: 98.3904% ( 2) 00:15:16.822 5.831 - 5.855: 98.3978% ( 1) 00:15:16.822 5.973 - 5.997: 98.4053% ( 1) 00:15:16.822 6.044 - 6.068: 98.4128% ( 1) 00:15:16.822 6.163 - 6.210: 98.4203% ( 1) 00:15:16.822 6.353 - 6.400: 98.4278% ( 1) 00:15:16.822 6.400 - 6.447: 98.4428% ( 2) 00:15:16.822 6.542 - 6.590: 98.4503% ( 1) 00:15:16.822 6.637 - 6.684: 98.4577% ( 1) 00:15:16.822 6.732 - 6.779: 98.4727% ( 2) 00:15:16.822 6.827 - 6.874: 98.4802% ( 1) 00:15:16.822 6.921 - 6.969: 98.4877% ( 1) 00:15:16.822 7.064 - 7.111: 98.4952% ( 1) 00:15:16.822 7.111 - 7.159: 98.5027% ( 1) 00:15:16.822 7.206 - 7.253: 98.5101% ( 1) 00:15:16.822 7.301 - 7.348: 98.5176% ( 1) 00:15:16.822 7.490 - 7.538: 98.5326% ( 2) 00:15:16.822 7.775 - 7.822: 98.5401% ( 1) 00:15:16.822 7.822 - 7.870: 98.5476% ( 1) 00:15:16.822 8.154 - 8.201: 98.5626% ( 2) 00:15:16.822 8.201 - 8.249: 98.5700% ( 1) 00:15:16.822 8.344 - 8.391: 98.5925% ( 3) 00:15:16.822 8.391 - 8.439: 98.6000% ( 1) 00:15:16.822 8.486 - 8.533: 98.6075% ( 1) 00:15:16.822 8.533 - 8.581: 98.6299% ( 3) 00:15:16.822 8.581 - 8.628: 98.6374% ( 1) 00:15:16.822 8.676 - 8.723: 98.6524% ( 2) 00:15:16.822 8.723 - 8.770: 98.6599% ( 1) 00:15:16.822 8.818 - 8.865: 98.6674% ( 1) 00:15:16.822 8.865 - 8.913: 98.6823% ( 2) 00:15:16.822 8.913 - 8.960: 98.6898% ( 1) 00:15:16.822 8.960 - 9.007: 98.6973% ( 1) 00:15:16.822 9.007 - 9.055: 98.7048% ( 1) 00:15:16.822 9.055 - 9.102: 98.7123% ( 1) 00:15:16.822 9.292 - 9.339: 98.7273% ( 2) 00:15:16.822 9.387 - 9.434: 98.7347% ( 1) 00:15:16.822 9.434 - 9.481: 98.7572% ( 3) 00:15:16.822 9.529 - 9.576: 98.7647% ( 1) 00:15:16.822 9.624 - 9.671: 98.7797% ( 2) 00:15:16.822 9.719 - 9.766: 98.7872% ( 1) 00:15:16.822 9.766 - 9.813: 98.7946% ( 1) 00:15:16.822 9.813 - 9.861: 98.8021% ( 1) 00:15:16.822 10.145 - 10.193: 98.8096% ( 1) 00:15:16.822 10.240 - 10.287: 98.8171% ( 1) 00:15:16.822 10.382 - 10.430: 98.8246% ( 1) 00:15:16.822 10.430 - 10.477: 98.8321% ( 1) 00:15:16.822 10.619 - 10.667: 98.8396% ( 1) 00:15:16.822 10.714 - 10.761: 98.8545% ( 2) 00:15:16.822 10.809 - 10.856: 98.8620% ( 1) 00:15:16.822 10.999 - 11.046: 98.8695% ( 1) 00:15:16.822 11.046 - 11.093: 98.8770% ( 1) 00:15:16.822 11.425 - 11.473: 98.8845% ( 1) 00:15:16.822 11.615 - 11.662: 98.8920% ( 1) 00:15:16.822 11.804 - 11.852: 98.8995% ( 1) 00:15:16.822 11.947 - 11.994: 98.9069% ( 1) 00:15:16.822 11.994 - 12.041: 98.9144% ( 1) 00:15:16.822 12.136 - 12.231: 98.9219% ( 1) 00:15:16.822 12.231 - 12.326: 98.9294% ( 1) 00:15:16.822 12.516 - 12.610: 98.9519% ( 3) 00:15:16.822 12.610 - 12.705: 98.9668% ( 2) 00:15:16.822 13.084 - 13.179: 98.9743% ( 1) 00:15:16.822 13.179 - 13.274: 98.9818% ( 1) 00:15:16.822 13.369 - 13.464: 98.9893% ( 1) 00:15:16.822 13.464 - 13.559: 98.9968% ( 1) 00:15:16.822 13.559 - 13.653: 99.0043% ( 1) 00:15:16.822 13.653 - 13.748: 99.0118% ( 1) 00:15:16.822 13.748 - 13.843: 99.0192% ( 1) 00:15:16.822 13.843 - 13.938: 99.0267% ( 1) 00:15:16.822 13.938 - 14.033: 99.0342% ( 1) 00:15:16.822 14.033 - 14.127: 99.0417% ( 1) 00:15:16.822 14.222 - 14.317: 99.0492% ( 1) 00:15:16.822 14.507 - 14.601: 99.0567% ( 1) 00:15:16.822 14.696 - 14.791: 99.0716% ( 2) 00:15:16.822 14.981 - 15.076: 99.0791% ( 1) 00:15:16.822 15.455 - 15.550: 99.0866% ( 1) 00:15:16.822 15.739 - 15.834: 99.0941% ( 1) 00:15:16.822 17.161 - 17.256: 99.1091% ( 
2) 00:15:16.822 17.256 - 17.351: 99.1241% ( 2) 00:15:16.822 17.351 - 17.446: 99.1540% ( 4) 00:15:16.822 17.446 - 17.541: 99.1765% ( 3) 00:15:16.822 17.541 - 17.636: 99.1914% ( 2) 00:15:16.822 17.636 - 17.730: 99.2438% ( 7) 00:15:16.822 17.730 - 17.825: 99.3037% ( 8) 00:15:16.822 17.825 - 17.920: 99.3337% ( 4) 00:15:16.822 17.920 - 18.015: 99.3636% ( 4) 00:15:16.822 18.015 - 18.110: 99.4160% ( 7) 00:15:16.822 18.110 - 18.204: 99.4535% ( 5) 00:15:16.822 18.204 - 18.299: 99.5209% ( 9) 00:15:16.822 18.299 - 18.394: 99.5508% ( 4) 00:15:16.822 18.394 - 18.489: 99.5807% ( 4) 00:15:16.822 18.489 - 18.584: 99.6182% ( 5) 00:15:16.822 18.584 - 18.679: 99.6481% ( 4) 00:15:16.822 18.679 - 18.773: 99.6781% ( 4) 00:15:16.822 18.773 - 18.868: 99.7455% ( 9) 00:15:16.822 18.868 - 18.963: 99.7829% ( 5) 00:15:16.822 18.963 - 19.058: 99.7904% ( 1) 00:15:16.822 19.058 - 19.153: 99.8128% ( 3) 00:15:16.822 19.153 - 19.247: 99.8203% ( 1) 00:15:16.822 20.859 - 20.954: 99.8278% ( 1) 00:15:16.822 21.523 - 21.618: 99.8353% ( 1) 00:15:16.822 22.756 - 22.850: 99.8428% ( 1) 00:15:16.822 23.419 - 23.514: 99.8503% ( 1) 00:15:16.822 24.462 - 24.652: 99.8578% ( 1) 00:15:16.822 24.652 - 24.841: 99.8652% ( 1) 00:15:16.822 27.307 - 27.496: 99.8727% ( 1) 00:15:16.822 28.824 - 29.013: 99.8802% ( 1) 00:15:16.822 29.393 - 29.582: 99.8877% ( 1) 00:15:16.822 33.944 - 34.133: 99.8952% ( 1) 00:15:16.822 1025.517 - 1031.585: 99.9027% ( 1) 00:15:16.822 3980.705 - 4004.978: 99.9551% ( 7) 00:15:16.822 4004.978 - 4029.250: 99.9775% ( 3) 00:15:16.822 6990.507 - 7039.052: 100.0000% ( 3) 00:15:16.822 00:15:16.822 Complete histogram 00:15:16.822 ================== 00:15:16.822 Range in us Cumulative Count 00:15:16.822 2.062 - 2.074: 0.4642% ( 62) 00:15:16.822 2.074 - 2.086: 26.5629% ( 3486) 00:15:16.822 2.086 - 2.098: 39.0657% ( 1670) 00:15:16.822 2.098 - 2.110: 42.6144% ( 474) 00:15:16.822 2.110 - 2.121: 54.3460% ( 1567) 00:15:16.822 2.121 - 2.133: 58.9354% ( 613) 00:15:16.822 2.133 - 2.145: 62.7461% ( 509) 00:15:16.822 2.145 - 2.157: 71.7976% ( 1209) 00:15:16.822 2.157 - 2.169: 74.5452% ( 367) 00:15:16.822 2.169 - 2.181: 76.0874% ( 206) 00:15:16.822 2.181 - 2.193: 79.6287% ( 473) 00:15:16.822 2.193 - 2.204: 81.4479% ( 243) 00:15:16.822 2.204 - 2.216: 82.5709% ( 150) 00:15:16.823 2.216 - 2.228: 86.6886% ( 550) 00:15:16.823 2.228 - 2.240: 88.9496% ( 302) 00:15:16.823 2.240 - 2.252: 90.8737% ( 257) 00:15:16.823 2.252 - 2.264: 92.6855% ( 242) 00:15:16.823 2.264 - 2.276: 93.5315% ( 113) 00:15:16.823 2.276 - 2.287: 93.9732% ( 59) 00:15:16.823 2.287 - 2.299: 94.3400% ( 49) 00:15:16.823 2.299 - 2.311: 94.7967% ( 61) 00:15:16.823 2.311 - 2.323: 95.3133% ( 69) 00:15:16.823 2.323 - 2.335: 95.5604% ( 33) 00:15:16.823 2.335 - 2.347: 95.6427% ( 11) 00:15:16.823 2.347 - 2.359: 95.6877% ( 6) 00:15:16.823 2.359 - 2.370: 95.7550% ( 9) 00:15:16.823 2.370 - 2.382: 95.8074% ( 7) 00:15:16.823 2.382 - 2.394: 95.9048% ( 13) 00:15:16.823 2.394 - 2.406: 96.1518% ( 33) 00:15:16.823 2.406 - 2.418: 96.3839% ( 31) 00:15:16.823 2.418 - 2.430: 96.5711% ( 25) 00:15:16.823 2.430 - 2.441: 96.7283% ( 21) 00:15:16.823 2.441 - 2.453: 96.8631% ( 18) 00:15:16.823 2.453 - 2.465: 97.0652% ( 27) 00:15:16.823 2.465 - 2.477: 97.2449% ( 24) 00:15:16.823 2.477 - 2.489: 97.4545% ( 28) 00:15:16.823 2.489 - 2.501: 97.6192% ( 22) 00:15:16.823 2.501 - 2.513: 97.7240% ( 14) 00:15:16.823 2.513 - 2.524: 97.8513% ( 17) 00:15:16.823 2.524 - 2.536: 97.9786% ( 17) 00:15:16.823 2.536 - 2.548: 98.0684% ( 12) 00:15:16.823 2.548 - 2.560: 98.1358% ( 9) 00:15:16.823 2.560 - 2.572: 98.2406% ( 14) 
00:15:16.823 2.572 - 2.584: 98.3080% ( 9) 00:15:16.823 2.584 - 2.596: 98.3454% ( 5) 00:15:16.823 2.596 - 2.607: 98.3679% ( 3) 00:15:16.823 2.607 - 2.619: 98.3829% ( 2) 00:15:16.823 2.619 - 2.631: 98.4203% ( 5) 00:15:16.823 2.631 - 2.643: 98.4428% ( 3) 00:15:16.823 2.643 - 2.655: 98.4577% ( 2) 00:15:16.823 2.690 - 2.702: 98.4652% ( 1) 00:15:16.823 2.714 - 2.726: 98.4727% ( 1) 00:15:16.823 2.750 - 2.761: 98.4802% ( 1) 00:15:16.823 2.761 - 2.773: 98.4877% ( 1) 00:15:16.823 2.809 - 2.821: 98.4952% ( 1) 00:15:16.823 2.987 - 2.999: 98.5027% ( 1) 00:15:16.823 3.081 - 3.105: 9[2024-07-25 18:48:28.684324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.080 8.5101% ( 1) 00:15:17.080 3.224 - 3.247: 98.5176% ( 1) 00:15:17.080 3.342 - 3.366: 98.5326% ( 2) 00:15:17.080 3.366 - 3.390: 98.5775% ( 6) 00:15:17.080 3.413 - 3.437: 98.5850% ( 1) 00:15:17.080 3.461 - 3.484: 98.6000% ( 2) 00:15:17.080 3.508 - 3.532: 98.6075% ( 1) 00:15:17.080 3.532 - 3.556: 98.6299% ( 3) 00:15:17.080 3.603 - 3.627: 98.6449% ( 2) 00:15:17.080 3.650 - 3.674: 98.6524% ( 1) 00:15:17.080 3.674 - 3.698: 98.6674% ( 2) 00:15:17.080 3.769 - 3.793: 98.6749% ( 1) 00:15:17.080 3.864 - 3.887: 98.6898% ( 2) 00:15:17.080 4.006 - 4.030: 98.6973% ( 1) 00:15:17.080 5.689 - 5.713: 98.7048% ( 1) 00:15:17.080 5.879 - 5.902: 98.7123% ( 1) 00:15:17.080 6.258 - 6.305: 98.7198% ( 1) 00:15:17.080 6.305 - 6.353: 98.7273% ( 1) 00:15:17.080 6.590 - 6.637: 98.7347% ( 1) 00:15:17.080 6.732 - 6.779: 98.7422% ( 1) 00:15:17.080 6.874 - 6.921: 98.7497% ( 1) 00:15:17.080 7.253 - 7.301: 98.7572% ( 1) 00:15:17.080 7.490 - 7.538: 98.7647% ( 1) 00:15:17.080 7.633 - 7.680: 98.7722% ( 1) 00:15:17.080 8.012 - 8.059: 98.7797% ( 1) 00:15:17.080 8.154 - 8.201: 98.7872% ( 1) 00:15:17.080 8.201 - 8.249: 98.7946% ( 1) 00:15:17.080 14.412 - 14.507: 98.8021% ( 1) 00:15:17.080 15.455 - 15.550: 98.8171% ( 2) 00:15:17.080 15.644 - 15.739: 98.8321% ( 2) 00:15:17.080 15.739 - 15.834: 98.8545% ( 3) 00:15:17.080 15.834 - 15.929: 98.8770% ( 3) 00:15:17.080 15.929 - 16.024: 98.8845% ( 1) 00:15:17.080 16.024 - 16.119: 98.9069% ( 3) 00:15:17.080 16.119 - 16.213: 98.9444% ( 5) 00:15:17.080 16.213 - 16.308: 98.9743% ( 4) 00:15:17.080 16.308 - 16.403: 98.9818% ( 1) 00:15:17.080 16.403 - 16.498: 99.0192% ( 5) 00:15:17.080 16.498 - 16.593: 99.0567% ( 5) 00:15:17.080 16.593 - 16.687: 99.1315% ( 10) 00:15:17.080 16.687 - 16.782: 99.1765% ( 6) 00:15:17.080 16.782 - 16.877: 99.1914% ( 2) 00:15:17.080 16.877 - 16.972: 99.2214% ( 4) 00:15:17.080 16.972 - 17.067: 99.2588% ( 5) 00:15:17.081 17.067 - 17.161: 99.2888% ( 4) 00:15:17.081 17.161 - 17.256: 99.3187% ( 4) 00:15:17.081 17.351 - 17.446: 99.3337% ( 2) 00:15:17.081 17.446 - 17.541: 99.3412% ( 1) 00:15:17.081 17.541 - 17.636: 99.3487% ( 1) 00:15:17.081 17.636 - 17.730: 99.3636% ( 2) 00:15:17.081 17.730 - 17.825: 99.3711% ( 1) 00:15:17.081 19.911 - 20.006: 99.3786% ( 1) 00:15:17.081 22.566 - 22.661: 99.3861% ( 1) 00:15:17.081 1013.381 - 1019.449: 99.3936% ( 1) 00:15:17.081 1031.585 - 1037.653: 99.4011% ( 1) 00:15:17.081 2148.124 - 2160.261: 99.4085% ( 1) 00:15:17.081 3980.705 - 4004.978: 99.8802% ( 63) 00:15:17.081 4004.978 - 4029.250: 99.9925% ( 15) 00:15:17.081 4975.881 - 5000.154: 100.0000% ( 1) 00:15:17.081 00:15:17.081 18:48:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:17.081 18:48:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:17.081 18:48:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:17.081 18:48:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:17.081 18:48:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.338 [ 00:15:17.338 { 00:15:17.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.338 "subtype": "Discovery", 00:15:17.338 "listen_addresses": [], 00:15:17.338 "allow_any_host": true, 00:15:17.338 "hosts": [] 00:15:17.338 }, 00:15:17.338 { 00:15:17.338 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.338 "subtype": "NVMe", 00:15:17.338 "listen_addresses": [ 00:15:17.338 { 00:15:17.338 "trtype": "VFIOUSER", 00:15:17.338 "adrfam": "IPv4", 00:15:17.338 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:17.338 "trsvcid": "0" 00:15:17.338 } 00:15:17.338 ], 00:15:17.338 "allow_any_host": true, 00:15:17.338 "hosts": [], 00:15:17.338 "serial_number": "SPDK1", 00:15:17.338 "model_number": "SPDK bdev Controller", 00:15:17.338 "max_namespaces": 32, 00:15:17.338 "min_cntlid": 1, 00:15:17.338 "max_cntlid": 65519, 00:15:17.338 "namespaces": [ 00:15:17.338 { 00:15:17.338 "nsid": 1, 00:15:17.338 "bdev_name": "Malloc1", 00:15:17.338 "name": "Malloc1", 00:15:17.338 "nguid": "44C1FA3ADC754E4E9C6782BBD68B474D", 00:15:17.338 "uuid": "44c1fa3a-dc75-4e4e-9c67-82bbd68b474d" 00:15:17.338 } 00:15:17.338 ] 00:15:17.338 }, 00:15:17.338 { 00:15:17.338 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.338 "subtype": "NVMe", 00:15:17.338 "listen_addresses": [ 00:15:17.338 { 00:15:17.338 "trtype": "VFIOUSER", 00:15:17.338 "adrfam": "IPv4", 00:15:17.338 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.338 "trsvcid": "0" 00:15:17.338 } 00:15:17.338 ], 00:15:17.338 "allow_any_host": true, 00:15:17.338 "hosts": [], 00:15:17.339 "serial_number": "SPDK2", 00:15:17.339 "model_number": "SPDK bdev Controller", 00:15:17.339 "max_namespaces": 32, 00:15:17.339 "min_cntlid": 1, 00:15:17.339 "max_cntlid": 65519, 00:15:17.339 "namespaces": [ 00:15:17.339 { 00:15:17.339 "nsid": 1, 00:15:17.339 "bdev_name": "Malloc2", 00:15:17.339 "name": "Malloc2", 00:15:17.339 "nguid": "2FA2DD1EC4BC4228A90E97A849C4EC9F", 00:15:17.339 "uuid": "2fa2dd1e-c4bc-4228-a90e-97a849c4ec9f" 00:15:17.339 } 00:15:17.339 ] 00:15:17.339 } 00:15:17.339 ] 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3498250 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:17.339 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:17.339 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.339 [2024-07-25 18:48:29.168580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.597 Malloc3 00:15:17.597 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:17.854 [2024-07-25 18:48:29.522205] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.854 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.854 Asynchronous Event Request test 00:15:17.854 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.854 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.854 Registering asynchronous event callbacks... 00:15:17.854 Starting namespace attribute notice tests for all controllers... 00:15:17.854 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:17.854 aer_cb - Changed Namespace 00:15:17.854 Cleaning up... 00:15:18.113 [ 00:15:18.113 { 00:15:18.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.114 "subtype": "Discovery", 00:15:18.114 "listen_addresses": [], 00:15:18.114 "allow_any_host": true, 00:15:18.114 "hosts": [] 00:15:18.114 }, 00:15:18.114 { 00:15:18.114 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.114 "subtype": "NVMe", 00:15:18.114 "listen_addresses": [ 00:15:18.114 { 00:15:18.114 "trtype": "VFIOUSER", 00:15:18.114 "adrfam": "IPv4", 00:15:18.114 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.114 "trsvcid": "0" 00:15:18.114 } 00:15:18.114 ], 00:15:18.114 "allow_any_host": true, 00:15:18.114 "hosts": [], 00:15:18.114 "serial_number": "SPDK1", 00:15:18.114 "model_number": "SPDK bdev Controller", 00:15:18.114 "max_namespaces": 32, 00:15:18.114 "min_cntlid": 1, 00:15:18.114 "max_cntlid": 65519, 00:15:18.114 "namespaces": [ 00:15:18.114 { 00:15:18.114 "nsid": 1, 00:15:18.114 "bdev_name": "Malloc1", 00:15:18.114 "name": "Malloc1", 00:15:18.114 "nguid": "44C1FA3ADC754E4E9C6782BBD68B474D", 00:15:18.114 "uuid": "44c1fa3a-dc75-4e4e-9c67-82bbd68b474d" 00:15:18.114 }, 00:15:18.114 { 00:15:18.114 "nsid": 2, 00:15:18.114 "bdev_name": "Malloc3", 00:15:18.114 "name": "Malloc3", 00:15:18.114 "nguid": "1803A1E0E7CA4D4A844E57ADB08F8897", 00:15:18.114 "uuid": "1803a1e0-e7ca-4d4a-844e-57adb08f8897" 00:15:18.114 } 00:15:18.114 ] 00:15:18.114 }, 00:15:18.114 { 00:15:18.114 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.114 "subtype": "NVMe", 00:15:18.114 "listen_addresses": [ 00:15:18.114 { 00:15:18.114 "trtype": "VFIOUSER", 00:15:18.114 "adrfam": "IPv4", 00:15:18.114 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.114 "trsvcid": "0" 00:15:18.114 } 00:15:18.114 ], 00:15:18.114 "allow_any_host": true, 00:15:18.114 "hosts": [], 00:15:18.114 "serial_number": "SPDK2", 00:15:18.114 "model_number": "SPDK bdev Controller", 00:15:18.114 
"max_namespaces": 32, 00:15:18.114 "min_cntlid": 1, 00:15:18.114 "max_cntlid": 65519, 00:15:18.114 "namespaces": [ 00:15:18.114 { 00:15:18.114 "nsid": 1, 00:15:18.114 "bdev_name": "Malloc2", 00:15:18.114 "name": "Malloc2", 00:15:18.114 "nguid": "2FA2DD1EC4BC4228A90E97A849C4EC9F", 00:15:18.114 "uuid": "2fa2dd1e-c4bc-4228-a90e-97a849c4ec9f" 00:15:18.114 } 00:15:18.114 ] 00:15:18.114 } 00:15:18.114 ] 00:15:18.114 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3498250 00:15:18.114 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.114 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:18.114 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:18.114 18:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:18.114 [2024-07-25 18:48:29.794340] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:18.114 [2024-07-25 18:48:29.794385] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498373 ] 00:15:18.114 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.114 [2024-07-25 18:48:29.828199] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:18.114 [2024-07-25 18:48:29.830576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.114 [2024-07-25 18:48:29.830609] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f73d2bba000 00:15:18.114 [2024-07-25 18:48:29.831577] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.832578] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.833582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.834585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.835594] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.836603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.837609] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.838610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.114 [2024-07-25 18:48:29.839621] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.114 [2024-07-25 18:48:29.839642] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f73d1970000 00:15:18.114 [2024-07-25 18:48:29.840780] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.114 [2024-07-25 18:48:29.854743] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:18.114 [2024-07-25 18:48:29.854772] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:18.114 [2024-07-25 18:48:29.859886] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.114 [2024-07-25 18:48:29.859937] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:18.114 [2024-07-25 18:48:29.860017] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:18.114 [2024-07-25 18:48:29.860055] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:18.114 [2024-07-25 18:48:29.860074] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:18.114 [2024-07-25 18:48:29.860887] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:18.114 [2024-07-25 18:48:29.860913] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:18.114 [2024-07-25 18:48:29.860926] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:18.114 [2024-07-25 18:48:29.861893] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.114 [2024-07-25 18:48:29.861913] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:18.114 [2024-07-25 18:48:29.861926] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:18.114 [2024-07-25 18:48:29.862901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:18.114 [2024-07-25 18:48:29.862925] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:18.114 [2024-07-25 18:48:29.863912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:18.114 [2024-07-25 18:48:29.863931] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:18.114 [2024-07-25 18:48:29.863941] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:18.114 [2024-07-25 18:48:29.863952] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:18.114 [2024-07-25 18:48:29.864065] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:18.114 [2024-07-25 18:48:29.864075] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:18.114 [2024-07-25 18:48:29.864084] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:18.114 [2024-07-25 18:48:29.864917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:18.114 [2024-07-25 18:48:29.865922] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:18.114 [2024-07-25 18:48:29.866926] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:18.114 [2024-07-25 18:48:29.867926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.114 [2024-07-25 18:48:29.868006] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:18.114 [2024-07-25 18:48:29.868947] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:18.114 [2024-07-25 18:48:29.868966] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:18.114 [2024-07-25 18:48:29.868976] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:18.114 [2024-07-25 18:48:29.868999] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:18.114 [2024-07-25 18:48:29.869016] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:18.114 [2024-07-25 18:48:29.869040] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.114 [2024-07-25 18:48:29.869071] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.114 [2024-07-25 18:48:29.869091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.877075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.877102] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:18.115 [2024-07-25 18:48:29.877114] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:18.115 [2024-07-25 18:48:29.877122] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:18.115 [2024-07-25 18:48:29.877134] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:18.115 [2024-07-25 18:48:29.877143] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:18.115 [2024-07-25 18:48:29.877151] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:18.115 [2024-07-25 18:48:29.877159] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.877172] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.877188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.885069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.885095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.115 [2024-07-25 18:48:29.885109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.115 [2024-07-25 18:48:29.885122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.115 [2024-07-25 18:48:29.885134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.115 [2024-07-25 18:48:29.885143] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.885160] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.885176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.893071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.893089] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:18.115 [2024-07-25 18:48:29.893098] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.893110] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.893124] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.893139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.901087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.901160] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.901178] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.901191] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:18.115 [2024-07-25 18:48:29.901203] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:18.115 [2024-07-25 18:48:29.901214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.909070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.909092] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:18.115 [2024-07-25 18:48:29.909108] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.909122] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.909135] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.115 [2024-07-25 18:48:29.909143] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.115 [2024-07-25 18:48:29.909153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.917068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.917098] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.917115] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.917128] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.115 [2024-07-25 18:48:29.917137] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.115 [2024-07-25 18:48:29.917147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.925072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.925093] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.925105] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.925119] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.925130] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.925138] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.925147] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:18.115 [2024-07-25 18:48:29.925155] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:18.115 [2024-07-25 18:48:29.925163] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:18.115 [2024-07-25 18:48:29.925192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.933071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.933102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.941072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.941097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.949071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.949098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.957073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.957100] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:18.115 [2024-07-25 18:48:29.957110] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:18.115 [2024-07-25 18:48:29.957117] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:18.115 [2024-07-25 18:48:29.957124] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:18.115 [2024-07-25 18:48:29.957134] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:18.115 [2024-07-25 18:48:29.957146] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:18.115 [2024-07-25 18:48:29.957154] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:18.115 [2024-07-25 18:48:29.957164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.957175] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:18.115 [2024-07-25 18:48:29.957184] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.115 [2024-07-25 18:48:29.957193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.957205] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:18.115 [2024-07-25 18:48:29.957213] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:18.115 [2024-07-25 18:48:29.957223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:18.115 [2024-07-25 18:48:29.965071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.965099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.965116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:18.115 [2024-07-25 18:48:29.965131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:18.115 ===================================================== 00:15:18.115 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.115 ===================================================== 00:15:18.115 Controller Capabilities/Features 00:15:18.115 ================================ 00:15:18.115 Vendor ID: 4e58 00:15:18.115 Subsystem Vendor ID: 4e58 00:15:18.116 Serial Number: SPDK2 00:15:18.116 Model Number: SPDK bdev Controller 00:15:18.116 Firmware Version: 24.05.1 00:15:18.116 Recommended Arb Burst: 6 00:15:18.116 IEEE OUI Identifier: 8d 6b 50 00:15:18.116 Multi-path I/O 00:15:18.116 May have multiple subsystem ports: Yes 00:15:18.116 May have multiple controllers: Yes 00:15:18.116 Associated with SR-IOV VF: No 00:15:18.116 Max Data Transfer Size: 131072 00:15:18.116 Max Number of Namespaces: 32 00:15:18.116 Max Number of I/O Queues: 127 00:15:18.116 NVMe Specification Version (VS): 1.3 00:15:18.116 NVMe Specification Version (Identify): 1.3 00:15:18.116 Maximum Queue Entries: 256 00:15:18.116 Contiguous Queues Required: Yes 00:15:18.116 Arbitration Mechanisms Supported 00:15:18.116 Weighted Round Robin: Not Supported 00:15:18.116 Vendor Specific: Not Supported 00:15:18.116 Reset Timeout: 15000 ms 00:15:18.116 Doorbell Stride: 4 bytes 
00:15:18.116 NVM Subsystem Reset: Not Supported 00:15:18.116 Command Sets Supported 00:15:18.116 NVM Command Set: Supported 00:15:18.116 Boot Partition: Not Supported 00:15:18.116 Memory Page Size Minimum: 4096 bytes 00:15:18.116 Memory Page Size Maximum: 4096 bytes 00:15:18.116 Persistent Memory Region: Not Supported 00:15:18.116 Optional Asynchronous Events Supported 00:15:18.116 Namespace Attribute Notices: Supported 00:15:18.116 Firmware Activation Notices: Not Supported 00:15:18.116 ANA Change Notices: Not Supported 00:15:18.116 PLE Aggregate Log Change Notices: Not Supported 00:15:18.116 LBA Status Info Alert Notices: Not Supported 00:15:18.116 EGE Aggregate Log Change Notices: Not Supported 00:15:18.116 Normal NVM Subsystem Shutdown event: Not Supported 00:15:18.116 Zone Descriptor Change Notices: Not Supported 00:15:18.116 Discovery Log Change Notices: Not Supported 00:15:18.116 Controller Attributes 00:15:18.116 128-bit Host Identifier: Supported 00:15:18.116 Non-Operational Permissive Mode: Not Supported 00:15:18.116 NVM Sets: Not Supported 00:15:18.116 Read Recovery Levels: Not Supported 00:15:18.116 Endurance Groups: Not Supported 00:15:18.116 Predictable Latency Mode: Not Supported 00:15:18.116 Traffic Based Keep ALive: Not Supported 00:15:18.116 Namespace Granularity: Not Supported 00:15:18.116 SQ Associations: Not Supported 00:15:18.116 UUID List: Not Supported 00:15:18.116 Multi-Domain Subsystem: Not Supported 00:15:18.116 Fixed Capacity Management: Not Supported 00:15:18.116 Variable Capacity Management: Not Supported 00:15:18.116 Delete Endurance Group: Not Supported 00:15:18.116 Delete NVM Set: Not Supported 00:15:18.116 Extended LBA Formats Supported: Not Supported 00:15:18.116 Flexible Data Placement Supported: Not Supported 00:15:18.116 00:15:18.116 Controller Memory Buffer Support 00:15:18.116 ================================ 00:15:18.116 Supported: No 00:15:18.116 00:15:18.116 Persistent Memory Region Support 00:15:18.116 ================================ 00:15:18.116 Supported: No 00:15:18.116 00:15:18.116 Admin Command Set Attributes 00:15:18.116 ============================ 00:15:18.116 Security Send/Receive: Not Supported 00:15:18.116 Format NVM: Not Supported 00:15:18.116 Firmware Activate/Download: Not Supported 00:15:18.116 Namespace Management: Not Supported 00:15:18.116 Device Self-Test: Not Supported 00:15:18.116 Directives: Not Supported 00:15:18.116 NVMe-MI: Not Supported 00:15:18.116 Virtualization Management: Not Supported 00:15:18.116 Doorbell Buffer Config: Not Supported 00:15:18.116 Get LBA Status Capability: Not Supported 00:15:18.116 Command & Feature Lockdown Capability: Not Supported 00:15:18.116 Abort Command Limit: 4 00:15:18.116 Async Event Request Limit: 4 00:15:18.116 Number of Firmware Slots: N/A 00:15:18.116 Firmware Slot 1 Read-Only: N/A 00:15:18.116 Firmware Activation Without Reset: N/A 00:15:18.116 Multiple Update Detection Support: N/A 00:15:18.116 Firmware Update Granularity: No Information Provided 00:15:18.116 Per-Namespace SMART Log: No 00:15:18.116 Asymmetric Namespace Access Log Page: Not Supported 00:15:18.116 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:18.116 Command Effects Log Page: Supported 00:15:18.116 Get Log Page Extended Data: Supported 00:15:18.116 Telemetry Log Pages: Not Supported 00:15:18.116 Persistent Event Log Pages: Not Supported 00:15:18.116 Supported Log Pages Log Page: May Support 00:15:18.116 Commands Supported & Effects Log Page: Not Supported 00:15:18.116 Feature Identifiers & Effects Log Page:May 
Support 00:15:18.116 NVMe-MI Commands & Effects Log Page: May Support 00:15:18.116 Data Area 4 for Telemetry Log: Not Supported 00:15:18.116 Error Log Page Entries Supported: 128 00:15:18.116 Keep Alive: Supported 00:15:18.116 Keep Alive Granularity: 10000 ms 00:15:18.116 00:15:18.116 NVM Command Set Attributes 00:15:18.116 ========================== 00:15:18.116 Submission Queue Entry Size 00:15:18.116 Max: 64 00:15:18.116 Min: 64 00:15:18.116 Completion Queue Entry Size 00:15:18.116 Max: 16 00:15:18.116 Min: 16 00:15:18.116 Number of Namespaces: 32 00:15:18.116 Compare Command: Supported 00:15:18.116 Write Uncorrectable Command: Not Supported 00:15:18.116 Dataset Management Command: Supported 00:15:18.116 Write Zeroes Command: Supported 00:15:18.116 Set Features Save Field: Not Supported 00:15:18.116 Reservations: Not Supported 00:15:18.116 Timestamp: Not Supported 00:15:18.116 Copy: Supported 00:15:18.116 Volatile Write Cache: Present 00:15:18.116 Atomic Write Unit (Normal): 1 00:15:18.116 Atomic Write Unit (PFail): 1 00:15:18.116 Atomic Compare & Write Unit: 1 00:15:18.116 Fused Compare & Write: Supported 00:15:18.116 Scatter-Gather List 00:15:18.116 SGL Command Set: Supported (Dword aligned) 00:15:18.116 SGL Keyed: Not Supported 00:15:18.116 SGL Bit Bucket Descriptor: Not Supported 00:15:18.116 SGL Metadata Pointer: Not Supported 00:15:18.116 Oversized SGL: Not Supported 00:15:18.116 SGL Metadata Address: Not Supported 00:15:18.116 SGL Offset: Not Supported 00:15:18.116 Transport SGL Data Block: Not Supported 00:15:18.116 Replay Protected Memory Block: Not Supported 00:15:18.116 00:15:18.116 Firmware Slot Information 00:15:18.116 ========================= 00:15:18.116 Active slot: 1 00:15:18.116 Slot 1 Firmware Revision: 24.05.1 00:15:18.116 00:15:18.116 00:15:18.116 Commands Supported and Effects 00:15:18.116 ============================== 00:15:18.116 Admin Commands 00:15:18.116 -------------- 00:15:18.116 Get Log Page (02h): Supported 00:15:18.116 Identify (06h): Supported 00:15:18.116 Abort (08h): Supported 00:15:18.116 Set Features (09h): Supported 00:15:18.116 Get Features (0Ah): Supported 00:15:18.116 Asynchronous Event Request (0Ch): Supported 00:15:18.116 Keep Alive (18h): Supported 00:15:18.116 I/O Commands 00:15:18.116 ------------ 00:15:18.116 Flush (00h): Supported LBA-Change 00:15:18.116 Write (01h): Supported LBA-Change 00:15:18.116 Read (02h): Supported 00:15:18.116 Compare (05h): Supported 00:15:18.116 Write Zeroes (08h): Supported LBA-Change 00:15:18.116 Dataset Management (09h): Supported LBA-Change 00:15:18.116 Copy (19h): Supported LBA-Change 00:15:18.116 Unknown (79h): Supported LBA-Change 00:15:18.116 Unknown (7Ah): Supported 00:15:18.116 00:15:18.116 Error Log 00:15:18.116 ========= 00:15:18.116 00:15:18.116 Arbitration 00:15:18.116 =========== 00:15:18.116 Arbitration Burst: 1 00:15:18.116 00:15:18.116 Power Management 00:15:18.116 ================ 00:15:18.116 Number of Power States: 1 00:15:18.116 Current Power State: Power State #0 00:15:18.116 Power State #0: 00:15:18.116 Max Power: 0.00 W 00:15:18.116 Non-Operational State: Operational 00:15:18.116 Entry Latency: Not Reported 00:15:18.116 Exit Latency: Not Reported 00:15:18.116 Relative Read Throughput: 0 00:15:18.116 Relative Read Latency: 0 00:15:18.116 Relative Write Throughput: 0 00:15:18.116 Relative Write Latency: 0 00:15:18.116 Idle Power: Not Reported 00:15:18.116 Active Power: Not Reported 00:15:18.116 Non-Operational Permissive Mode: Not Supported 00:15:18.116 00:15:18.116 Health Information 
00:15:18.116 ================== 00:15:18.116 Critical Warnings: 00:15:18.116 Available Spare Space: OK 00:15:18.116 Temperature: OK 00:15:18.116 Device Reliability: OK 00:15:18.116 Read Only: No 00:15:18.116 Volatile Memory Backup: OK 00:15:18.116 Current Temperature: 0 Kelvin[2024-07-25 18:48:29.965266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:18.116 [2024-07-25 18:48:29.973071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:18.117 [2024-07-25 18:48:29.973145] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:18.117 [2024-07-25 18:48:29.973164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.117 [2024-07-25 18:48:29.973180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.117 [2024-07-25 18:48:29.973192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.117 [2024-07-25 18:48:29.973202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.117 [2024-07-25 18:48:29.973290] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:18.117 [2024-07-25 18:48:29.973312] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:18.117 [2024-07-25 18:48:29.974294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.117 [2024-07-25 18:48:29.974378] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:18.117 [2024-07-25 18:48:29.974393] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:18.117 [2024-07-25 18:48:29.975299] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:18.117 [2024-07-25 18:48:29.975323] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:18.117 [2024-07-25 18:48:29.975389] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:18.117 [2024-07-25 18:48:29.976618] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.374 (-273 Celsius) 00:15:18.374 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:18.375 Available Spare: 0% 00:15:18.375 Available Spare Threshold: 0% 00:15:18.375 Life Percentage Used: 0% 00:15:18.375 Data Units Read: 0 00:15:18.375 Data Units Written: 0 00:15:18.375 Host Read Commands: 0 00:15:18.375 Host Write Commands: 0 00:15:18.375 Controller Busy Time: 0 minutes 00:15:18.375 Power Cycles: 0 00:15:18.375 Power On Hours: 0 hours 00:15:18.375 Unsafe Shutdowns: 0 00:15:18.375 Unrecoverable Media Errors: 0 00:15:18.375 Lifetime Error Log Entries: 0 00:15:18.375 Warning Temperature Time: 0 
minutes 00:15:18.375 Critical Temperature Time: 0 minutes 00:15:18.375 00:15:18.375 Number of Queues 00:15:18.375 ================ 00:15:18.375 Number of I/O Submission Queues: 127 00:15:18.375 Number of I/O Completion Queues: 127 00:15:18.375 00:15:18.375 Active Namespaces 00:15:18.375 ================= 00:15:18.375 Namespace ID:1 00:15:18.375 Error Recovery Timeout: Unlimited 00:15:18.375 Command Set Identifier: NVM (00h) 00:15:18.375 Deallocate: Supported 00:15:18.375 Deallocated/Unwritten Error: Not Supported 00:15:18.375 Deallocated Read Value: Unknown 00:15:18.375 Deallocate in Write Zeroes: Not Supported 00:15:18.375 Deallocated Guard Field: 0xFFFF 00:15:18.375 Flush: Supported 00:15:18.375 Reservation: Supported 00:15:18.375 Namespace Sharing Capabilities: Multiple Controllers 00:15:18.375 Size (in LBAs): 131072 (0GiB) 00:15:18.375 Capacity (in LBAs): 131072 (0GiB) 00:15:18.375 Utilization (in LBAs): 131072 (0GiB) 00:15:18.375 NGUID: 2FA2DD1EC4BC4228A90E97A849C4EC9F 00:15:18.375 UUID: 2fa2dd1e-c4bc-4228-a90e-97a849c4ec9f 00:15:18.375 Thin Provisioning: Not Supported 00:15:18.375 Per-NS Atomic Units: Yes 00:15:18.375 Atomic Boundary Size (Normal): 0 00:15:18.375 Atomic Boundary Size (PFail): 0 00:15:18.375 Atomic Boundary Offset: 0 00:15:18.375 Maximum Single Source Range Length: 65535 00:15:18.375 Maximum Copy Length: 65535 00:15:18.375 Maximum Source Range Count: 1 00:15:18.375 NGUID/EUI64 Never Reused: No 00:15:18.375 Namespace Write Protected: No 00:15:18.375 Number of LBA Formats: 1 00:15:18.375 Current LBA Format: LBA Format #00 00:15:18.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:18.375 00:15:18.375 18:48:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:18.375 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.375 [2024-07-25 18:48:30.204910] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.636 Initializing NVMe Controllers 00:15:23.636 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.636 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:23.636 Initialization complete. Launching workers. 
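The spdk_nvme_perf command launched above runs a 5-second, 4096-byte read pass at queue depth 128 against the vfio-user controller on core mask 0x2. As a rough illustration only (the binary path, transport string, and flags are copied from the log line above; everything else is an assumption, not part of the test scripts), the same invocation could be driven from Python like this:

# Minimal sketch: re-issue the perf run recorded above via subprocess.
# Binary path, transport ID, and flags are taken verbatim from this log.
import subprocess

SPDK_DIR = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"  # workspace path seen in this log
TRID = ("trtype:VFIOUSER "
        "traddr:/var/run/vfio-user/domain/vfio-user2/2 "
        "subnqn:nqn.2019-07.io.spdk:cnode2")

def run_perf(workload):
    # Flags mirror the '-w read' run above: 4096-byte IOs (-o), queue depth 128 (-q),
    # 5 seconds (-t), core mask 0x2 (-c); -s and -g are passed through unchanged.
    cmd = [SPDK_DIR + "/build/bin/spdk_nvme_perf", "-r", TRID,
           "-s", "256", "-g", "-q", "128", "-o", "4096",
           "-w", workload, "-t", "5", "-c", "0x2"]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(run_perf("read"))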
00:15:23.636 ======================================================== 00:15:23.636 Latency(us) 00:15:23.636 Device Information : IOPS MiB/s Average min max 00:15:23.636 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36289.16 141.75 3526.54 1156.86 7544.78 00:15:23.636 ======================================================== 00:15:23.636 Total : 36289.16 141.75 3526.54 1156.86 7544.78 00:15:23.636 00:15:23.636 [2024-07-25 18:48:35.311420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.636 18:48:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:23.636 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.893 [2024-07-25 18:48:35.544100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.154 Initializing NVMe Controllers 00:15:29.154 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.154 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.154 Initialization complete. Launching workers. 00:15:29.154 ======================================================== 00:15:29.154 Latency(us) 00:15:29.154 Device Information : IOPS MiB/s Average min max 00:15:29.154 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33799.46 132.03 3786.17 1181.90 7695.04 00:15:29.154 ======================================================== 00:15:29.154 Total : 33799.46 132.03 3786.17 1181.90 7695.04 00:15:29.154 00:15:29.154 [2024-07-25 18:48:40.568915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.154 18:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:29.154 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.154 [2024-07-25 18:48:40.776781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.426 [2024-07-25 18:48:45.920231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.426 Initializing NVMe Controllers 00:15:34.426 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.426 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.426 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:34.426 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:34.426 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:34.426 Initialization complete. Launching workers. 
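Both spdk_nvme_perf passes end with a one-row summary (IOPS, MiB/s, average/min/max latency in microseconds). A small sketch, assuming only the "Total :" row layout visible above, that pulls those figures out and compares the read and write runs; the two sample rows are copied from this log:

# Parse the 'Total :' summary row of an spdk_nvme_perf run.
import re

def parse_total(perf_output):
    # Returns (iops, mib_s, avg_us, min_us, max_us) from the summary row.
    m = re.search(r"Total\s*:\s*([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)",
                  perf_output)
    if m is None:
        raise ValueError("no 'Total :' row found")
    return tuple(float(x) for x in m.groups())

read_total  = "Total : 36289.16 141.75 3526.54 1156.86 7544.78"   # '-w read' run above
write_total = "Total : 33799.46 132.03 3786.17 1181.90 7695.04"   # '-w write' run above
r, w = parse_total(read_total), parse_total(write_total)
print(f"read  : {r[0]:.0f} IOPS, {r[2]:.1f} us avg latency")
print(f"write : {w[0]:.0f} IOPS, {w[2]:.1f} us avg latency")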
00:15:34.426 Starting thread on core 2 00:15:34.426 Starting thread on core 3 00:15:34.426 Starting thread on core 1 00:15:34.426 18:48:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:34.426 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.426 [2024-07-25 18:48:46.220017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.706 [2024-07-25 18:48:49.300425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.706 Initializing NVMe Controllers 00:15:37.706 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.706 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.706 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:37.706 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:37.706 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:37.706 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:37.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:37.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:37.707 Initialization complete. Launching workers. 00:15:37.707 Starting thread on core 1 with urgent priority queue 00:15:37.707 Starting thread on core 2 with urgent priority queue 00:15:37.707 Starting thread on core 3 with urgent priority queue 00:15:37.707 Starting thread on core 0 with urgent priority queue 00:15:37.707 SPDK bdev Controller (SPDK2 ) core 0: 4954.00 IO/s 20.19 secs/100000 ios 00:15:37.707 SPDK bdev Controller (SPDK2 ) core 1: 5452.67 IO/s 18.34 secs/100000 ios 00:15:37.707 SPDK bdev Controller (SPDK2 ) core 2: 5391.67 IO/s 18.55 secs/100000 ios 00:15:37.707 SPDK bdev Controller (SPDK2 ) core 3: 5590.33 IO/s 17.89 secs/100000 ios 00:15:37.707 ======================================================== 00:15:37.707 00:15:37.707 18:48:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:37.707 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.963 [2024-07-25 18:48:49.593609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.963 Initializing NVMe Controllers 00:15:37.963 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.963 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.963 Namespace ID: 1 size: 0GB 00:15:37.963 Initialization complete. 00:15:37.963 INFO: using host memory buffer for IO 00:15:37.963 Hello world! 
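The arbitration run above reports per-core throughput for four SPDK bdev Controller (SPDK2) worker threads. As a quick arithmetic check only (the IO/s values are copied from the per-core result lines above), the per-core shares work out as follows:

# Re-total the per-core arbitration results printed above.
core_iops = {0: 4954.00, 1: 5452.67, 2: 5391.67, 3: 5590.33}  # IO/s per core, from this log
total = sum(core_iops.values())
for core, iops in sorted(core_iops.items()):
    print(f"core {core}: {iops:8.2f} IO/s ({100.0 * iops / total:5.1f}% of total)")
print(f"total : {total:8.2f} IO/s across {len(core_iops)} cores")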
00:15:37.963 [2024-07-25 18:48:49.606693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.963 18:48:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:37.963 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.220 [2024-07-25 18:48:49.878870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.152 Initializing NVMe Controllers 00:15:39.152 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.152 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.152 Initialization complete. Launching workers. 00:15:39.152 submit (in ns) avg, min, max = 6339.5, 3505.6, 4014618.9 00:15:39.152 complete (in ns) avg, min, max = 24656.3, 2061.1, 4018743.3 00:15:39.152 00:15:39.152 Submit histogram 00:15:39.152 ================ 00:15:39.152 Range in us Cumulative Count 00:15:39.152 3.484 - 3.508: 0.0223% ( 3) 00:15:39.152 3.508 - 3.532: 0.4097% ( 52) 00:15:39.152 3.532 - 3.556: 1.3854% ( 131) 00:15:39.152 3.556 - 3.579: 3.9997% ( 351) 00:15:39.152 3.579 - 3.603: 8.7740% ( 641) 00:15:39.152 3.603 - 3.627: 17.1160% ( 1120) 00:15:39.152 3.627 - 3.650: 27.1488% ( 1347) 00:15:39.152 3.650 - 3.674: 37.6508% ( 1410) 00:15:39.152 3.674 - 3.698: 45.2182% ( 1016) 00:15:39.152 3.698 - 3.721: 51.4301% ( 834) 00:15:39.152 3.721 - 3.745: 55.1542% ( 500) 00:15:39.152 3.745 - 3.769: 59.2284% ( 547) 00:15:39.152 3.769 - 3.793: 62.5652% ( 448) 00:15:39.152 3.793 - 3.816: 65.7605% ( 429) 00:15:39.152 3.816 - 3.840: 68.6057% ( 382) 00:15:39.152 3.840 - 3.864: 72.6724% ( 546) 00:15:39.152 3.864 - 3.887: 76.9179% ( 570) 00:15:39.152 3.887 - 3.911: 80.2994% ( 454) 00:15:39.152 3.911 - 3.935: 83.3979% ( 416) 00:15:39.152 3.935 - 3.959: 85.1408% ( 234) 00:15:39.153 3.959 - 3.982: 86.4293% ( 173) 00:15:39.153 3.982 - 4.006: 88.0530% ( 218) 00:15:39.153 4.006 - 4.030: 89.0585% ( 135) 00:15:39.153 4.030 - 4.053: 89.8704% ( 109) 00:15:39.153 4.053 - 4.077: 90.7791% ( 122) 00:15:39.153 4.077 - 4.101: 91.6207% ( 113) 00:15:39.153 4.101 - 4.124: 92.3954% ( 104) 00:15:39.153 4.124 - 4.148: 92.9316% ( 72) 00:15:39.153 4.148 - 4.172: 93.2966% ( 49) 00:15:39.153 4.172 - 4.196: 93.5871% ( 39) 00:15:39.153 4.196 - 4.219: 93.7658% ( 24) 00:15:39.153 4.219 - 4.243: 93.9818% ( 29) 00:15:39.153 4.243 - 4.267: 94.1233% ( 19) 00:15:39.153 4.267 - 4.290: 94.2723% ( 20) 00:15:39.153 4.290 - 4.314: 94.4511% ( 24) 00:15:39.153 4.314 - 4.338: 94.6447% ( 26) 00:15:39.153 4.338 - 4.361: 94.7490% ( 14) 00:15:39.153 4.361 - 4.385: 94.8235% ( 10) 00:15:39.153 4.385 - 4.409: 94.8980% ( 10) 00:15:39.153 4.409 - 4.433: 94.9948% ( 13) 00:15:39.153 4.433 - 4.456: 95.0469% ( 7) 00:15:39.153 4.456 - 4.480: 95.1214% ( 10) 00:15:39.153 4.480 - 4.504: 95.2033% ( 11) 00:15:39.153 4.504 - 4.527: 95.2927% ( 12) 00:15:39.153 4.527 - 4.551: 95.3523% ( 8) 00:15:39.153 4.551 - 4.575: 95.4119% ( 8) 00:15:39.153 4.575 - 4.599: 95.4342% ( 3) 00:15:39.153 4.599 - 4.622: 95.4789% ( 6) 00:15:39.153 4.622 - 4.646: 95.5311% ( 7) 00:15:39.153 4.646 - 4.670: 95.5609% ( 4) 00:15:39.153 4.670 - 4.693: 95.6055% ( 6) 00:15:39.153 4.693 - 4.717: 95.6502% ( 6) 00:15:39.153 4.717 - 4.741: 95.7024% ( 7) 00:15:39.153 4.741 - 4.764: 95.7769% ( 10) 00:15:39.153 4.764 - 4.788: 95.8513% ( 10) 00:15:39.153 4.788 - 4.812: 95.9109% ( 8) 
00:15:39.153 4.812 - 4.836: 95.9928% ( 11) 00:15:39.153 4.836 - 4.859: 96.0822% ( 12) 00:15:39.153 4.859 - 4.883: 96.1046% ( 3) 00:15:39.153 4.883 - 4.907: 96.2163% ( 15) 00:15:39.153 4.907 - 4.930: 96.3131% ( 13) 00:15:39.153 4.930 - 4.954: 96.4993% ( 25) 00:15:39.153 4.954 - 4.978: 96.5962% ( 13) 00:15:39.153 4.978 - 5.001: 96.7377% ( 19) 00:15:39.153 5.001 - 5.025: 96.8568% ( 16) 00:15:39.153 5.025 - 5.049: 96.9984% ( 19) 00:15:39.153 5.049 - 5.073: 97.0952% ( 13) 00:15:39.153 5.073 - 5.096: 97.1548% ( 8) 00:15:39.153 5.096 - 5.120: 97.2293% ( 10) 00:15:39.153 5.120 - 5.144: 97.3112% ( 11) 00:15:39.153 5.144 - 5.167: 97.3782% ( 9) 00:15:39.153 5.167 - 5.191: 97.4378% ( 8) 00:15:39.153 5.191 - 5.215: 97.4825% ( 6) 00:15:39.153 5.215 - 5.239: 97.5644% ( 11) 00:15:39.153 5.239 - 5.262: 97.6017% ( 5) 00:15:39.153 5.262 - 5.286: 97.6687% ( 9) 00:15:39.153 5.286 - 5.310: 97.7134% ( 6) 00:15:39.153 5.310 - 5.333: 97.7506% ( 5) 00:15:39.153 5.333 - 5.357: 97.7879% ( 5) 00:15:39.153 5.357 - 5.381: 97.8177% ( 4) 00:15:39.153 5.381 - 5.404: 97.8326% ( 2) 00:15:39.153 5.404 - 5.428: 97.8549% ( 3) 00:15:39.153 5.428 - 5.452: 97.8773% ( 3) 00:15:39.153 5.452 - 5.476: 97.8996% ( 3) 00:15:39.153 5.476 - 5.499: 97.9145% ( 2) 00:15:39.153 5.499 - 5.523: 97.9368% ( 3) 00:15:39.153 5.523 - 5.547: 97.9666% ( 4) 00:15:39.153 5.547 - 5.570: 97.9741% ( 1) 00:15:39.153 5.570 - 5.594: 97.9815% ( 1) 00:15:39.153 5.594 - 5.618: 98.0262% ( 6) 00:15:39.153 5.618 - 5.641: 98.0337% ( 1) 00:15:39.153 5.665 - 5.689: 98.0411% ( 1) 00:15:39.153 5.713 - 5.736: 98.0486% ( 1) 00:15:39.153 5.736 - 5.760: 98.0560% ( 1) 00:15:39.153 5.760 - 5.784: 98.0635% ( 1) 00:15:39.153 5.831 - 5.855: 98.0784% ( 2) 00:15:39.153 5.855 - 5.879: 98.0858% ( 1) 00:15:39.153 5.879 - 5.902: 98.0933% ( 1) 00:15:39.153 5.973 - 5.997: 98.1007% ( 1) 00:15:39.153 6.021 - 6.044: 98.1081% ( 1) 00:15:39.153 6.068 - 6.116: 98.1379% ( 4) 00:15:39.153 6.116 - 6.163: 98.1454% ( 1) 00:15:39.153 6.163 - 6.210: 98.1603% ( 2) 00:15:39.153 6.210 - 6.258: 98.1752% ( 2) 00:15:39.153 6.258 - 6.305: 98.1901% ( 2) 00:15:39.153 6.353 - 6.400: 98.2050% ( 2) 00:15:39.153 6.400 - 6.447: 98.2273% ( 3) 00:15:39.153 6.447 - 6.495: 98.2422% ( 2) 00:15:39.153 6.495 - 6.542: 98.2497% ( 1) 00:15:39.153 6.542 - 6.590: 98.2646% ( 2) 00:15:39.153 6.637 - 6.684: 98.2720% ( 1) 00:15:39.153 6.732 - 6.779: 98.2795% ( 1) 00:15:39.153 6.779 - 6.827: 98.2944% ( 2) 00:15:39.153 7.016 - 7.064: 98.3093% ( 2) 00:15:39.153 7.396 - 7.443: 98.3241% ( 2) 00:15:39.153 7.585 - 7.633: 98.3390% ( 2) 00:15:39.153 7.633 - 7.680: 98.3465% ( 1) 00:15:39.153 7.680 - 7.727: 98.3688% ( 3) 00:15:39.153 7.727 - 7.775: 98.3763% ( 1) 00:15:39.153 7.775 - 7.822: 98.3837% ( 1) 00:15:39.153 7.822 - 7.870: 98.3912% ( 1) 00:15:39.153 7.917 - 7.964: 98.4135% ( 3) 00:15:39.153 8.012 - 8.059: 98.4284% ( 2) 00:15:39.153 8.059 - 8.107: 98.4433% ( 2) 00:15:39.153 8.154 - 8.201: 98.4508% ( 1) 00:15:39.153 8.201 - 8.249: 98.4657% ( 2) 00:15:39.153 8.249 - 8.296: 98.4731% ( 1) 00:15:39.153 8.296 - 8.344: 98.4880% ( 2) 00:15:39.153 8.391 - 8.439: 98.4955% ( 1) 00:15:39.153 8.581 - 8.628: 98.5104% ( 2) 00:15:39.153 8.628 - 8.676: 98.5252% ( 2) 00:15:39.153 8.676 - 8.723: 98.5476% ( 3) 00:15:39.153 8.770 - 8.818: 98.5550% ( 1) 00:15:39.153 8.818 - 8.865: 98.5625% ( 1) 00:15:39.153 8.865 - 8.913: 98.5774% ( 2) 00:15:39.153 8.960 - 9.007: 98.5848% ( 1) 00:15:39.153 9.102 - 9.150: 98.5923% ( 1) 00:15:39.153 9.244 - 9.292: 98.5997% ( 1) 00:15:39.153 9.387 - 9.434: 98.6072% ( 1) 00:15:39.153 9.434 - 9.481: 98.6146% ( 1) 00:15:39.153 
9.529 - 9.576: 98.6295% ( 2) 00:15:39.153 9.624 - 9.671: 98.6370% ( 1) 00:15:39.153 9.813 - 9.861: 98.6593% ( 3) 00:15:39.153 9.956 - 10.003: 98.6668% ( 1) 00:15:39.153 10.050 - 10.098: 98.6742% ( 1) 00:15:39.153 10.098 - 10.145: 98.6817% ( 1) 00:15:39.153 10.145 - 10.193: 98.6891% ( 1) 00:15:39.153 10.193 - 10.240: 98.6966% ( 1) 00:15:39.153 10.287 - 10.335: 98.7040% ( 1) 00:15:39.153 10.430 - 10.477: 98.7115% ( 1) 00:15:39.153 10.667 - 10.714: 98.7189% ( 1) 00:15:39.153 10.951 - 10.999: 98.7264% ( 1) 00:15:39.153 11.046 - 11.093: 98.7338% ( 1) 00:15:39.153 11.330 - 11.378: 98.7412% ( 1) 00:15:39.153 11.615 - 11.662: 98.7561% ( 2) 00:15:39.153 11.662 - 11.710: 98.7636% ( 1) 00:15:39.153 11.710 - 11.757: 98.7710% ( 1) 00:15:39.153 11.804 - 11.852: 98.7785% ( 1) 00:15:39.153 11.947 - 11.994: 98.7859% ( 1) 00:15:39.153 12.041 - 12.089: 98.7934% ( 1) 00:15:39.153 12.231 - 12.326: 98.8083% ( 2) 00:15:39.153 12.326 - 12.421: 98.8157% ( 1) 00:15:39.153 12.421 - 12.516: 98.8232% ( 1) 00:15:39.153 12.610 - 12.705: 98.8306% ( 1) 00:15:39.153 12.990 - 13.084: 98.8381% ( 1) 00:15:39.153 13.084 - 13.179: 98.8455% ( 1) 00:15:39.153 13.369 - 13.464: 98.8530% ( 1) 00:15:39.153 13.559 - 13.653: 98.8679% ( 2) 00:15:39.153 13.653 - 13.748: 98.8753% ( 1) 00:15:39.153 13.748 - 13.843: 98.8902% ( 2) 00:15:39.153 13.843 - 13.938: 98.8977% ( 1) 00:15:39.153 14.127 - 14.222: 98.9200% ( 3) 00:15:39.153 14.317 - 14.412: 98.9275% ( 1) 00:15:39.153 14.412 - 14.507: 98.9349% ( 1) 00:15:39.153 14.601 - 14.696: 98.9498% ( 2) 00:15:39.153 14.791 - 14.886: 98.9572% ( 1) 00:15:39.153 17.161 - 17.256: 98.9870% ( 4) 00:15:39.153 17.256 - 17.351: 99.0168% ( 4) 00:15:39.153 17.351 - 17.446: 99.0317% ( 2) 00:15:39.153 17.446 - 17.541: 99.0690% ( 5) 00:15:39.153 17.541 - 17.636: 99.1062% ( 5) 00:15:39.153 17.636 - 17.730: 99.1732% ( 9) 00:15:39.153 17.730 - 17.825: 99.1956% ( 3) 00:15:39.153 17.825 - 17.920: 99.2328% ( 5) 00:15:39.153 17.920 - 18.015: 99.2850% ( 7) 00:15:39.153 18.015 - 18.110: 99.3371% ( 7) 00:15:39.153 18.110 - 18.204: 99.4339% ( 13) 00:15:39.153 18.204 - 18.299: 99.5382% ( 14) 00:15:39.153 18.299 - 18.394: 99.6201% ( 11) 00:15:39.153 18.394 - 18.489: 99.7095% ( 12) 00:15:39.153 18.489 - 18.584: 99.7393% ( 4) 00:15:39.153 18.584 - 18.679: 99.7468% ( 1) 00:15:39.153 18.679 - 18.773: 99.7914% ( 6) 00:15:39.153 18.773 - 18.868: 99.8138% ( 3) 00:15:39.153 18.868 - 18.963: 99.8212% ( 1) 00:15:39.153 18.963 - 19.058: 99.8361% ( 2) 00:15:39.153 19.058 - 19.153: 99.8510% ( 2) 00:15:39.153 19.153 - 19.247: 99.8659% ( 2) 00:15:39.153 19.342 - 19.437: 99.8734% ( 1) 00:15:39.153 19.437 - 19.532: 99.8808% ( 1) 00:15:39.153 20.006 - 20.101: 99.8883% ( 1) 00:15:39.153 20.859 - 20.954: 99.8957% ( 1) 00:15:39.153 22.566 - 22.661: 99.9032% ( 1) 00:15:39.153 22.850 - 22.945: 99.9106% ( 1) 00:15:39.154 24.652 - 24.841: 99.9181% ( 1) 00:15:39.154 25.600 - 25.790: 99.9255% ( 1) 00:15:39.154 26.927 - 27.117: 99.9330% ( 1) 00:15:39.154 29.203 - 29.393: 99.9404% ( 1) 00:15:39.154 3568.071 - 3592.344: 99.9479% ( 1) 00:15:39.154 3980.705 - 4004.978: 99.9926% ( 6) 00:15:39.154 4004.978 - 4029.250: 100.0000% ( 1) 00:15:39.154 00:15:39.154 Complete histogram 00:15:39.154 ================== 00:15:39.154 Range in us Cumulative Count 00:15:39.154 2.050 - 2.062: 0.0074% ( 1) 00:15:39.154 2.062 - 2.074: 8.9826% ( 1205) 00:15:39.154 2.074 - 2.086: 33.9863% ( 3357) 00:15:39.154 2.086 - 2.098: 36.7496% ( 371) 00:15:39.154 2.098 - 2.110: 46.1716% ( 1265) 00:15:39.154 2.110 - 2.121: 57.5823% ( 1532) 00:15:39.154 2.121 - 2.133: 59.3773% ( 241) 
00:15:39.154 2.133 - 2.145: 66.3191% ( 932) 00:15:39.154 2.145 - 2.157: 72.3224% ( 806) 00:15:39.154 2.157 - 2.169: 73.3800% ( 142) 00:15:39.154 2.169 - 2.181: 76.5827% ( 430) 00:15:39.154 2.181 - 2.193: 79.4056% ( 379) 00:15:39.154 2.193 - 2.204: 80.1579% ( 101) 00:15:39.154 2.204 - 2.216: 82.8542% ( 362) 00:15:39.154 2.216 - 2.228: 85.7813% ( 393) 00:15:39.154 2.228 - 2.240: 87.7253% ( 261) 00:15:39.154 2.240 - 2.252: 89.7438% ( 271) 00:15:39.154 2.252 - 2.264: 91.3600% ( 217) 00:15:39.154 2.264 - 2.276: 91.6580% ( 40) 00:15:39.154 2.276 - 2.287: 92.0378% ( 51) 00:15:39.154 2.287 - 2.299: 92.4624% ( 57) 00:15:39.154 2.299 - 2.311: 92.9987% ( 72) 00:15:39.154 2.311 - 2.323: 93.3413% ( 46) 00:15:39.154 2.323 - 2.335: 93.4604% ( 16) 00:15:39.154 2.335 - 2.347: 93.5051% ( 6) 00:15:39.154 2.347 - 2.359: 93.5647% ( 8) 00:15:39.154 2.359 - 2.370: 93.6467% ( 11) 00:15:39.154 2.370 - 2.382: 93.7509% ( 14) 00:15:39.154 2.382 - 2.394: 93.9371% ( 25) 00:15:39.154 2.394 - 2.406: 94.1978% ( 35) 00:15:39.154 2.406 - 2.418: 94.4511% ( 34) 00:15:39.154 2.418 - 2.430: 94.7043% ( 34) 00:15:39.154 2.430 - 2.441: 94.9799% ( 37) 00:15:39.154 2.441 - 2.453: 95.2108% ( 31) 00:15:39.154 2.453 - 2.465: 95.3970% ( 25) 00:15:39.154 2.465 - 2.477: 95.6651% ( 36) 00:15:39.154 2.477 - 2.489: 95.7917% ( 17) 00:15:39.154 2.489 - 2.501: 95.9109% ( 16) 00:15:39.154 2.501 - 2.513: 96.0226% ( 15) 00:15:39.154 2.513 - 2.524: 96.1716% ( 20) 00:15:39.154 2.524 - 2.536: 96.2833% ( 15) 00:15:39.154 2.536 - 2.548: 96.3876% ( 14) 00:15:39.154 2.548 - 2.560: 96.4248% ( 5) 00:15:39.154 2.560 - 2.572: 96.4844% ( 8) 00:15:39.154 2.572 - 2.584: 96.5589% ( 10) 00:15:39.154 2.584 - 2.596: 96.6185% ( 8) 00:15:39.154 2.596 - 2.607: 96.6557% ( 5) 00:15:39.154 2.607 - 2.619: 96.7377% ( 11) 00:15:39.154 2.619 - 2.631: 96.7600% ( 3) 00:15:39.154 2.631 - 2.643: 96.7973% ( 5) 00:15:39.154 2.643 - 2.655: 96.8494% ( 7) 00:15:39.154 2.655 - 2.667: 96.8941% ( 6) 00:15:39.154 2.667 - 2.679: 96.9015% ( 1) 00:15:39.154 2.679 - 2.690: 96.9611% ( 8) 00:15:39.154 2.702 - 2.714: 97.0431% ( 11) 00:15:39.154 2.714 - 2.726: 97.1175% ( 10) 00:15:39.154 2.726 - 2.738: 97.1697% ( 7) 00:15:39.154 2.738 - 2.750: 97.1995% ( 4) 00:15:39.154 2.750 - 2.761: 97.2590% ( 8) 00:15:39.154 2.761 - 2.773: 97.3112% ( 7) 00:15:39.154 2.773 - 2.785: 97.3484% ( 5) 00:15:39.154 2.785 - 2.797: 97.3782% ( 4) 00:15:39.154 2.797 - 2.809: 97.4229% ( 6) 00:15:39.154 2.809 - 2.821: 97.4899% ( 9) 00:15:39.154 2.821 - 2.833: 97.5421% ( 7) 00:15:39.154 2.833 - 2.844: 97.5868% ( 6) 00:15:39.154 2.844 - 2.856: 97.6166% ( 4) 00:15:39.154 2.856 - 2.868: 97.6687% ( 7) 00:15:39.154 2.868 - 2.880: 97.7059% ( 5) 00:15:39.154 2.880 - 2.892: 97.7581% ( 7) 00:15:39.154 2.892 - 2.904: 97.8028% ( 6) 00:15:39.154 2.904 - 2.916: 97.8549% ( 7) 00:15:39.154 2.916 - 2.927: 97.8921% ( 5) 00:15:39.154 2.927 - 2.939: 97.9294% ( 5) 00:15:39.154 2.939 - 2.951: 97.9368% ( 1) 00:15:39.154 2.951 - 2.963: 97.9443% ( 1) 00:15:39.154 2.963 - 2.975: 97.9666% ( 3) 00:15:39.154 2.975 - 2.987: 97.9890% ( 3) 00:15:39.154 2.987 - 2.999: 98.0113% ( 3) 00:15:39.154 2.999 - 3.010: 98.0262% ( 2) 00:15:39.154 3.010 - 3.022: 98.0709% ( 6) 00:15:39.154 3.022 - 3.034: 98.0933% ( 3) 00:15:39.154 3.034 - 3.058: 98.1305% ( 5) 00:15:39.154 3.058 - 3.081: 98.1752% ( 6) 00:15:39.154 3.081 - 3.105: 98.1826% ( 1) 00:15:39.154 3.105 - 3.129: 98.2199% ( 5) 00:15:39.154 3.129 - 3.153: 98.2348% ( 2) 00:15:39.154 3.153 - 3.176: 98.2497% ( 2) 00:15:39.154 3.176 - 3.200: 98.2795% ( 4) 00:15:39.154 3.200 - 3.224: 98.3093% ( 4) 00:15:39.154 3.224 - 
3.247: 98.3316% ( 3) 00:15:39.154 3.247 - 3.271: 98.3465% ( 2) 00:15:39.154 3.271 - 3.295: 98.3912% ( 6) 00:15:39.154 3.295 - 3.319: 98.4061% ( 2) 00:15:39.154 3.319 - 3.342: 98.4359% ( 4) 00:15:39.154 3.366 - 3.390: 98.4657% ( 4) 00:15:39.154 3.390 - 3.413: 98.5029% ( 5) 00:15:39.154 3.413 - 3.437: 98.5178% ( 2) 00:15:39.154 3.437 - 3.461: 98.5327% ( 2) 00:15:39.154 3.461 - 3.484: 98.5401% ( 1) 00:15:39.154 3.508 - 3.532: 98.5550% ( 2) 00:15:39.154 3.532 - 3.556: 98.5625% ( 1) 00:15:39.154 3.556 - 3.579: 98.5699% ( 1) 00:15:39.154 3.579 - 3.603: 98.5997% ( 4) 00:15:39.154 3.603 - 3.627: 98.6295% ( 4) 00:15:39.154 3.627 - 3.650: 98.6668% ( 5) 00:15:39.154 3.650 - 3.674: 98.6817% ( 2) 00:15:39.154 3.674 - 3.698: 98.6891% ( 1) 00:15:39.154 3.698 - 3.721: 98.6966% ( 1) 00:15:39.154 3.745 - 3.769: 98.7040% ( 1) 00:15:39.154 3.793 - 3.816: 98.7189% ( 2) 00:15:39.154 3.887 - 3.911: 98.7264% ( 1) 00:15:39.154 3.911 - 3.935: 98.7338% ( 1) 00:15:39.154 3.935 - 3.959: 98.7412% ( 1) 00:15:39.154 4.006 - 4.030: 98.7487% ( 1) 00:15:39.154 4.219 - 4.243: 98.7561% ( 1) 00:15:39.154 4.290 - 4.314: 98.7636% ( 1) 00:15:39.154 4.361 - 4.385: 98.7710% ( 1) 00:15:39.154 5.404 - 5.428: 98.7785% ( 1) 00:15:39.154 5.594 - 5.618: 98.7859% ( 1) 00:15:39.154 5.689 - 5.713: 98.7934% ( 1) 00:15:39.154 6.258 - 6.305: 98.8008% ( 1) 00:15:39.154 6.305 - 6.353: 98.8083% ( 1) 00:15:39.154 6.400 - 6.447: 98.8157% ( 1) 00:15:39.154 7.016 - 7.064: 98.8306% ( 2) 00:15:39.154 7.253 - 7.301: 98.8381% ( 1) 00:15:39.154 7.727 - 7.775: 98.8455% ( 1) 00:15:39.154 7.775 - 7.822: 98.8530%[2024-07-25 18:48:50.984010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.154 ( 1) 00:15:39.154 7.822 - 7.870: 98.8604% ( 1) 00:15:39.154 8.628 - 8.676: 98.8679% ( 1) 00:15:39.154 8.865 - 8.913: 98.8753% ( 1) 00:15:39.154 10.145 - 10.193: 98.8828% ( 1) 00:15:39.154 12.136 - 12.231: 98.8902% ( 1) 00:15:39.154 15.550 - 15.644: 98.9051% ( 2) 00:15:39.154 15.644 - 15.739: 98.9349% ( 4) 00:15:39.154 15.739 - 15.834: 98.9498% ( 2) 00:15:39.154 15.834 - 15.929: 98.9721% ( 3) 00:15:39.154 15.929 - 16.024: 98.9870% ( 2) 00:15:39.154 16.024 - 16.119: 99.0019% ( 2) 00:15:39.154 16.119 - 16.213: 99.0168% ( 2) 00:15:39.154 16.213 - 16.308: 99.0466% ( 4) 00:15:39.154 16.308 - 16.403: 99.0764% ( 4) 00:15:39.154 16.403 - 16.498: 99.1062% ( 4) 00:15:39.154 16.498 - 16.593: 99.1658% ( 8) 00:15:39.154 16.593 - 16.687: 99.2105% ( 6) 00:15:39.154 16.687 - 16.782: 99.2254% ( 2) 00:15:39.154 16.782 - 16.877: 99.2477% ( 3) 00:15:39.154 16.877 - 16.972: 99.2552% ( 1) 00:15:39.154 16.972 - 17.067: 99.2999% ( 6) 00:15:39.154 17.067 - 17.161: 99.3148% ( 2) 00:15:39.154 17.161 - 17.256: 99.3222% ( 1) 00:15:39.154 17.256 - 17.351: 99.3297% ( 1) 00:15:39.154 17.351 - 17.446: 99.3446% ( 2) 00:15:39.154 17.446 - 17.541: 99.3520% ( 1) 00:15:39.154 17.541 - 17.636: 99.3595% ( 1) 00:15:39.154 17.636 - 17.730: 99.3892% ( 4) 00:15:39.154 18.015 - 18.110: 99.3967% ( 1) 00:15:39.154 18.110 - 18.204: 99.4041% ( 1) 00:15:39.154 18.204 - 18.299: 99.4116% ( 1) 00:15:39.154 18.963 - 19.058: 99.4265% ( 2) 00:15:39.154 22.281 - 22.376: 99.4339% ( 1) 00:15:39.154 825.268 - 831.336: 99.4414% ( 1) 00:15:39.154 3980.705 - 4004.978: 99.8659% ( 57) 00:15:39.154 4004.978 - 4029.250: 100.0000% ( 18) 00:15:39.154 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:39.412 [ 00:15:39.412 { 00:15:39.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:39.412 "subtype": "Discovery", 00:15:39.412 "listen_addresses": [], 00:15:39.412 "allow_any_host": true, 00:15:39.412 "hosts": [] 00:15:39.412 }, 00:15:39.412 { 00:15:39.412 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:39.412 "subtype": "NVMe", 00:15:39.412 "listen_addresses": [ 00:15:39.412 { 00:15:39.412 "trtype": "VFIOUSER", 00:15:39.412 "adrfam": "IPv4", 00:15:39.412 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:39.412 "trsvcid": "0" 00:15:39.412 } 00:15:39.412 ], 00:15:39.412 "allow_any_host": true, 00:15:39.412 "hosts": [], 00:15:39.412 "serial_number": "SPDK1", 00:15:39.412 "model_number": "SPDK bdev Controller", 00:15:39.412 "max_namespaces": 32, 00:15:39.412 "min_cntlid": 1, 00:15:39.412 "max_cntlid": 65519, 00:15:39.412 "namespaces": [ 00:15:39.412 { 00:15:39.412 "nsid": 1, 00:15:39.412 "bdev_name": "Malloc1", 00:15:39.412 "name": "Malloc1", 00:15:39.412 "nguid": "44C1FA3ADC754E4E9C6782BBD68B474D", 00:15:39.412 "uuid": "44c1fa3a-dc75-4e4e-9c67-82bbd68b474d" 00:15:39.412 }, 00:15:39.412 { 00:15:39.412 "nsid": 2, 00:15:39.412 "bdev_name": "Malloc3", 00:15:39.412 "name": "Malloc3", 00:15:39.412 "nguid": "1803A1E0E7CA4D4A844E57ADB08F8897", 00:15:39.412 "uuid": "1803a1e0-e7ca-4d4a-844e-57adb08f8897" 00:15:39.412 } 00:15:39.412 ] 00:15:39.412 }, 00:15:39.412 { 00:15:39.412 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:39.412 "subtype": "NVMe", 00:15:39.412 "listen_addresses": [ 00:15:39.412 { 00:15:39.412 "trtype": "VFIOUSER", 00:15:39.412 "adrfam": "IPv4", 00:15:39.412 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:39.412 "trsvcid": "0" 00:15:39.412 } 00:15:39.412 ], 00:15:39.412 "allow_any_host": true, 00:15:39.412 "hosts": [], 00:15:39.412 "serial_number": "SPDK2", 00:15:39.412 "model_number": "SPDK bdev Controller", 00:15:39.412 "max_namespaces": 32, 00:15:39.412 "min_cntlid": 1, 00:15:39.412 "max_cntlid": 65519, 00:15:39.412 "namespaces": [ 00:15:39.412 { 00:15:39.412 "nsid": 1, 00:15:39.412 "bdev_name": "Malloc2", 00:15:39.412 "name": "Malloc2", 00:15:39.412 "nguid": "2FA2DD1EC4BC4228A90E97A849C4EC9F", 00:15:39.412 "uuid": "2fa2dd1e-c4bc-4228-a90e-97a849c4ec9f" 00:15:39.412 } 00:15:39.412 ] 00:15:39.412 } 00:15:39.412 ] 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3500895 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # 
'[' '!' -e /tmp/aer_touch_file ']' 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:39.412 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:39.670 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.670 [2024-07-25 18:48:51.435572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.928 Malloc4 00:15:39.928 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:39.928 [2024-07-25 18:48:51.805327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.186 18:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:40.186 Asynchronous Event Request test 00:15:40.186 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.186 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.186 Registering asynchronous event callbacks... 00:15:40.186 Starting namespace attribute notice tests for all controllers... 00:15:40.186 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:40.186 aer_cb - Changed Namespace 00:15:40.186 Cleaning up... 
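The namespace-change test above is driven entirely from the RPC side: the aer tool registers for asynchronous events and blocks, and the harness then hot-adds a namespace, which is what produces the "aer_cb - Changed Namespace" callback and the updated subsystem listing that follows. A rough sketch of the same flow issued by hand (relative paths assume the SPDK source tree; the /tmp/aer_touch_file handshake the harness uses for synchronization is omitted here):

  # start the AER listener against the vfio-user controller, in the background
  ./test/nvme/aer/aer -g -n 2 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' &
  # hot-add a second namespace; the target raises a namespace-attribute-changed event
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  ./scripts/rpc.py nvmf_get_subsystems    # Malloc4 now appears as nsid 2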
00:15:40.186 [ 00:15:40.186 { 00:15:40.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.186 "subtype": "Discovery", 00:15:40.186 "listen_addresses": [], 00:15:40.186 "allow_any_host": true, 00:15:40.186 "hosts": [] 00:15:40.186 }, 00:15:40.186 { 00:15:40.186 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:40.186 "subtype": "NVMe", 00:15:40.186 "listen_addresses": [ 00:15:40.186 { 00:15:40.186 "trtype": "VFIOUSER", 00:15:40.186 "adrfam": "IPv4", 00:15:40.186 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:40.186 "trsvcid": "0" 00:15:40.186 } 00:15:40.186 ], 00:15:40.186 "allow_any_host": true, 00:15:40.186 "hosts": [], 00:15:40.186 "serial_number": "SPDK1", 00:15:40.186 "model_number": "SPDK bdev Controller", 00:15:40.186 "max_namespaces": 32, 00:15:40.186 "min_cntlid": 1, 00:15:40.186 "max_cntlid": 65519, 00:15:40.186 "namespaces": [ 00:15:40.186 { 00:15:40.186 "nsid": 1, 00:15:40.186 "bdev_name": "Malloc1", 00:15:40.186 "name": "Malloc1", 00:15:40.186 "nguid": "44C1FA3ADC754E4E9C6782BBD68B474D", 00:15:40.186 "uuid": "44c1fa3a-dc75-4e4e-9c67-82bbd68b474d" 00:15:40.186 }, 00:15:40.186 { 00:15:40.186 "nsid": 2, 00:15:40.186 "bdev_name": "Malloc3", 00:15:40.186 "name": "Malloc3", 00:15:40.186 "nguid": "1803A1E0E7CA4D4A844E57ADB08F8897", 00:15:40.186 "uuid": "1803a1e0-e7ca-4d4a-844e-57adb08f8897" 00:15:40.186 } 00:15:40.186 ] 00:15:40.186 }, 00:15:40.186 { 00:15:40.186 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:40.186 "subtype": "NVMe", 00:15:40.186 "listen_addresses": [ 00:15:40.186 { 00:15:40.186 "trtype": "VFIOUSER", 00:15:40.186 "adrfam": "IPv4", 00:15:40.186 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:40.186 "trsvcid": "0" 00:15:40.186 } 00:15:40.186 ], 00:15:40.186 "allow_any_host": true, 00:15:40.186 "hosts": [], 00:15:40.186 "serial_number": "SPDK2", 00:15:40.186 "model_number": "SPDK bdev Controller", 00:15:40.186 "max_namespaces": 32, 00:15:40.186 "min_cntlid": 1, 00:15:40.186 "max_cntlid": 65519, 00:15:40.186 "namespaces": [ 00:15:40.186 { 00:15:40.186 "nsid": 1, 00:15:40.186 "bdev_name": "Malloc2", 00:15:40.186 "name": "Malloc2", 00:15:40.186 "nguid": "2FA2DD1EC4BC4228A90E97A849C4EC9F", 00:15:40.186 "uuid": "2fa2dd1e-c4bc-4228-a90e-97a849c4ec9f" 00:15:40.186 }, 00:15:40.186 { 00:15:40.186 "nsid": 2, 00:15:40.186 "bdev_name": "Malloc4", 00:15:40.186 "name": "Malloc4", 00:15:40.186 "nguid": "6F98597096D34833982DCDBD618EC842", 00:15:40.186 "uuid": "6f985970-96d3-4833-982d-cdbd618ec842" 00:15:40.186 } 00:15:40.186 ] 00:15:40.186 } 00:15:40.186 ] 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3500895 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3494802 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3494802 ']' 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3494802 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3494802 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3494802' 00:15:40.448 killing process with pid 3494802 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3494802 00:15:40.448 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3494802 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3501033 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3501033' 00:15:40.733 Process pid: 3501033 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3501033 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3501033 ']' 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:40.733 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:40.733 [2024-07-25 18:48:52.484236] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:40.733 [2024-07-25 18:48:52.485287] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:40.733 [2024-07-25 18:48:52.485368] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.733 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.733 [2024-07-25 18:48:52.547399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.991 [2024-07-25 18:48:52.634569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.991 [2024-07-25 18:48:52.634619] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:40.991 [2024-07-25 18:48:52.634647] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.991 [2024-07-25 18:48:52.634658] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.991 [2024-07-25 18:48:52.634668] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.991 [2024-07-25 18:48:52.634763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.991 [2024-07-25 18:48:52.638078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.991 [2024-07-25 18:48:52.638149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.991 [2024-07-25 18:48:52.642074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.991 [2024-07-25 18:48:52.738263] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:40.991 [2024-07-25 18:48:52.738459] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:40.991 [2024-07-25 18:48:52.738763] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:40.991 [2024-07-25 18:48:52.739286] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:40.991 [2024-07-25 18:48:52.739519] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:40.991 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:40.991 18:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:40.991 18:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:41.922 18:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:42.180 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:42.180 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:42.180 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.180 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:42.438 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:42.438 Malloc1 00:15:42.696 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:42.696 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:42.954 18:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:43.211 18:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:43.211 18:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:43.211 18:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:43.469 Malloc2 00:15:43.469 18:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:43.726 18:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:43.983 18:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3501033 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3501033 ']' 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3501033 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3501033 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3501033' 00:15:44.241 killing process with pid 3501033 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3501033 00:15:44.241 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3501033 00:15:44.500 18:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:44.500 18:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.500 00:15:44.500 real 0m52.476s 00:15:44.500 user 3m27.199s 00:15:44.500 sys 0m4.404s 00:15:44.500 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:44.500 18:48:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:44.500 ************************************ 00:15:44.500 END TEST nvmf_vfio_user 00:15:44.500 ************************************ 00:15:44.758 18:48:56 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:44.758 18:48:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:44.758 18:48:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:44.758 18:48:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:44.758 ************************************ 00:15:44.758 START TEST nvmf_vfio_user_nvme_compliance 00:15:44.758 
************************************ 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:44.758 * Looking for test storage... 00:15:44.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.758 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3501630 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3501630' 00:15:44.759 Process pid: 3501630 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3501630 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3501630 ']' 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:44.759 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.759 [2024-07-25 18:48:56.535706] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:44.759 [2024-07-25 18:48:56.535785] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.759 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.759 [2024-07-25 18:48:56.592819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:45.018 [2024-07-25 18:48:56.676900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.018 [2024-07-25 18:48:56.676956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.018 [2024-07-25 18:48:56.676986] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.018 [2024-07-25 18:48:56.676997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.018 [2024-07-25 18:48:56.677007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:45.018 [2024-07-25 18:48:56.677096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.018 [2024-07-25 18:48:56.677163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.018 [2024-07-25 18:48:56.677166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.018 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:45.018 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:45.018 18:48:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.949 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.207 malloc0 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.207 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.208 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.208 18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.208 
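Collapsing the rpc_cmd calls above, the target-side provisioning for the compliance run is a handful of RPCs (rpc_cmd is effectively the test harness's wrapper around scripts/rpc.py; the paths, NQN, and serial are specific to this test):

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

Once the listener is added, /var/run/vfio-user holds the controller's socket files, and any host that speaks the VFIOUSER transport (here, the nvme_compliance tool) can attach using the matching trtype/traddr/subnqn string.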
18:48:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:46.208 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.208 00:15:46.208 00:15:46.208 CUnit - A unit testing framework for C - Version 2.1-3 00:15:46.208 http://cunit.sourceforge.net/ 00:15:46.208 00:15:46.208 00:15:46.208 Suite: nvme_compliance 00:15:46.208 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 18:48:58.025627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.208 [2024-07-25 18:48:58.027089] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:46.208 [2024-07-25 18:48:58.027115] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:46.208 [2024-07-25 18:48:58.027128] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:46.208 [2024-07-25 18:48:58.028647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.208 passed 00:15:46.464 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 18:48:58.117276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.464 [2024-07-25 18:48:58.120295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.464 passed 00:15:46.464 Test: admin_identify_ns ...[2024-07-25 18:48:58.206680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.464 [2024-07-25 18:48:58.266114] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:46.464 [2024-07-25 18:48:58.274080] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:46.464 [2024-07-25 18:48:58.295189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.464 passed 00:15:46.721 Test: admin_get_features_mandatory_features ...[2024-07-25 18:48:58.381976] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.721 [2024-07-25 18:48:58.384998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.721 passed 00:15:46.721 Test: admin_get_features_optional_features ...[2024-07-25 18:48:58.470531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.721 [2024-07-25 18:48:58.473547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.721 passed 00:15:46.721 Test: admin_set_features_number_of_queues ...[2024-07-25 18:48:58.557547] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.978 [2024-07-25 18:48:58.663186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.978 passed 00:15:46.978 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 18:48:58.746454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.978 [2024-07-25 18:48:58.749474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.978 passed 00:15:46.978 Test: admin_get_log_page_with_lpo ...[2024-07-25 18:48:58.835581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.235 [2024-07-25 18:48:58.903109] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:47.235 [2024-07-25 18:48:58.916158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.235 passed 00:15:47.235 Test: fabric_property_get ...[2024-07-25 18:48:58.999462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.235 [2024-07-25 18:48:59.000725] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:47.235 [2024-07-25 18:48:59.002483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.235 passed 00:15:47.235 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 18:48:59.087995] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.235 [2024-07-25 18:48:59.089313] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:47.235 [2024-07-25 18:48:59.091028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.493 passed 00:15:47.493 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 18:48:59.176157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.493 [2024-07-25 18:48:59.262069] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:47.493 [2024-07-25 18:48:59.278066] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:47.493 [2024-07-25 18:48:59.283182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.493 passed 00:15:47.493 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 18:48:59.365703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.493 [2024-07-25 18:48:59.366998] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:47.493 [2024-07-25 18:48:59.368733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.750 passed 00:15:47.750 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 18:48:59.450814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.750 [2024-07-25 18:48:59.526071] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:47.750 [2024-07-25 18:48:59.550085] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:47.750 [2024-07-25 18:48:59.555155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.750 passed 00:15:48.007 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 18:48:59.637399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.007 [2024-07-25 18:48:59.638676] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:48.007 [2024-07-25 18:48:59.638730] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:48.007 [2024-07-25 18:48:59.640436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.007 passed 00:15:48.007 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 18:48:59.725571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.007 [2024-07-25 18:48:59.821072] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:48.007 [2024-07-25 18:48:59.829067] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:48.007 [2024-07-25 18:48:59.837066] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:48.007 [2024-07-25 18:48:59.845068] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:48.007 [2024-07-25 18:48:59.874184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.264 passed 00:15:48.264 Test: admin_create_io_sq_verify_pc ...[2024-07-25 18:48:59.954169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.264 [2024-07-25 18:48:59.972081] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:48.264 [2024-07-25 18:48:59.989532] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.264 passed 00:15:48.264 Test: admin_create_io_qp_max_qps ...[2024-07-25 18:49:00.075241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.632 [2024-07-25 18:49:01.188076] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:49.889 [2024-07-25 18:49:01.585743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.889 passed 00:15:49.889 Test: admin_create_io_sq_shared_cq ...[2024-07-25 18:49:01.671857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.146 [2024-07-25 18:49:01.804073] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:50.146 [2024-07-25 18:49:01.841148] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.146 passed 00:15:50.146 00:15:50.146 Run Summary: Type Total Ran Passed Failed Inactive 00:15:50.146 suites 1 1 n/a 0 0 00:15:50.146 tests 18 18 18 0 0 00:15:50.146 asserts 360 360 360 0 n/a 00:15:50.146 00:15:50.146 Elapsed time = 1.588 seconds 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3501630 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3501630 ']' 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3501630 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3501630 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3501630' 00:15:50.146 killing process with pid 3501630 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3501630 00:15:50.146 18:49:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3501630 00:15:50.404 18:49:02 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:50.404 18:49:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:50.404 00:15:50.404 real 0m5.758s 00:15:50.404 user 0m16.179s 00:15:50.404 sys 0m0.543s 00:15:50.404 18:49:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:50.404 18:49:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.404 ************************************ 00:15:50.404 END TEST nvmf_vfio_user_nvme_compliance 00:15:50.404 ************************************ 00:15:50.404 18:49:02 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:50.404 18:49:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:50.404 18:49:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:50.404 18:49:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.404 ************************************ 00:15:50.404 START TEST nvmf_vfio_user_fuzz 00:15:50.404 ************************************ 00:15:50.404 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:50.404 * Looking for test storage... 00:15:50.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.404 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.404 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:50.662 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3502347 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3502347' 00:15:50.663 Process pid: 3502347 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3502347 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3502347 ']' 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
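The fuzz target is now up: nvmf_tgt was started with -i 0 -e 0xFFFF -m 0x1 and the harness is waiting on its RPC socket. The rpc_cmd calls traced just below stand up the vfio-user endpoint before the fuzzer is pointed at it; spelled out as direct scripts/rpc.py invocations (a sketch only -- rpc_cmd in autotest_common.sh is essentially a wrapper around rpc.py, so these are not the literal commands run), the sequence is roughly:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0                   # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a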
00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.663 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.921 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:50.921 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:50.921 18:49:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.854 malloc0 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:51.854 18:49:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:23.964 Fuzzing completed. 
Shutting down the fuzz application 00:16:23.964 00:16:23.964 Dumping successful admin opcodes: 00:16:23.964 8, 9, 10, 24, 00:16:23.964 Dumping successful io opcodes: 00:16:23.964 0, 00:16:23.964 NS: 0x200003a1ef00 I/O qp, Total commands completed: 614817, total successful commands: 2374, random_seed: 1402343552 00:16:23.964 NS: 0x200003a1ef00 admin qp, Total commands completed: 78352, total successful commands: 605, random_seed: 3071305152 00:16:23.964 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:23.964 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.964 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.964 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3502347 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3502347 ']' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3502347 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3502347 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3502347' 00:16:23.965 killing process with pid 3502347 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3502347 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3502347 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:23.965 00:16:23.965 real 0m32.203s 00:16:23.965 user 0m31.546s 00:16:23.965 sys 0m30.126s 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:23.965 18:49:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.965 ************************************ 00:16:23.965 END TEST nvmf_vfio_user_fuzz 00:16:23.965 ************************************ 00:16:23.965 18:49:34 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:23.965 18:49:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:23.965 18:49:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:23.965 18:49:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.965 ************************************ 00:16:23.965 START TEST nvmf_host_management 00:16:23.965 
************************************ 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:23.965 * Looking for test storage... 00:16:23.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.965 18:49:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.901 18:49:36 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:24.901 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:24.901 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:24.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.901 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:24.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.902 18:49:36 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:24.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:16:24.902 00:16:24.902 --- 10.0.0.2 ping statistics --- 00:16:24.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.902 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:16:24.902 00:16:24.902 --- 10.0.0.1 ping statistics --- 00:16:24.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.902 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3507764 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3507764 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3507764 ']' 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
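The nvmf_tcp_init sequence above moves the target-side port into its own network namespace so that initiator and target traffic cross a real link. Reduced to plain iproute2/iptables commands (copied from the trace; cvl_0_0 and cvl_0_1 are the two ice ports detected earlier), the topology is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP to the listener port

The two pings confirm 10.0.0.2 (target) and 10.0.0.1 (initiator) answer before nvmf_tgt is launched inside cvl_0_0_ns_spdk with core mask 0x1E.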
00:16:24.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:24.902 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.902 [2024-07-25 18:49:36.654172] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:24.902 [2024-07-25 18:49:36.654246] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.902 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.902 [2024-07-25 18:49:36.720113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.160 [2024-07-25 18:49:36.813786] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.160 [2024-07-25 18:49:36.813845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.160 [2024-07-25 18:49:36.813870] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.160 [2024-07-25 18:49:36.813883] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.160 [2024-07-25 18:49:36.813895] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.160 [2024-07-25 18:49:36.813993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.160 [2024-07-25 18:49:36.814085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.160 [2024-07-25 18:49:36.814166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:25.160 [2024-07-25 18:49:36.814169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.160 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.161 [2024-07-25 18:49:36.974887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.161 18:49:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.161 Malloc0 00:16:25.161 [2024-07-25 18:49:37.034768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.418 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.418 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3507834 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3507834 /var/tmp/bdevperf.sock 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3507834 ']' 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
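For readability, the bdevperf invocation traced above boils down to the following (a sketch: the Jenkins workspace path is shortened, and gen_nvmf_target_json is the harness helper whose rendered output is streamed in on /dev/fd/63 and printed a few lines below):

    build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \           # private RPC socket for this bdevperf instance
        --json <(gen_nvmf_target_json 0) \    # bdev_nvme_attach_controller config for Nvme0
        -q 64 -o 65536 -w verify -t 10        # queue depth 64, 64 KiB I/O, verify workload, 10 seconds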
00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:25.419 { 00:16:25.419 "params": { 00:16:25.419 "name": "Nvme$subsystem", 00:16:25.419 "trtype": "$TEST_TRANSPORT", 00:16:25.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.419 "adrfam": "ipv4", 00:16:25.419 "trsvcid": "$NVMF_PORT", 00:16:25.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.419 "hdgst": ${hdgst:-false}, 00:16:25.419 "ddgst": ${ddgst:-false} 00:16:25.419 }, 00:16:25.419 "method": "bdev_nvme_attach_controller" 00:16:25.419 } 00:16:25.419 EOF 00:16:25.419 )") 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:25.419 18:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:25.419 "params": { 00:16:25.419 "name": "Nvme0", 00:16:25.419 "trtype": "tcp", 00:16:25.419 "traddr": "10.0.0.2", 00:16:25.419 "adrfam": "ipv4", 00:16:25.419 "trsvcid": "4420", 00:16:25.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:25.419 "hdgst": false, 00:16:25.419 "ddgst": false 00:16:25.419 }, 00:16:25.419 "method": "bdev_nvme_attach_controller" 00:16:25.419 }' 00:16:25.419 [2024-07-25 18:49:37.111780] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:25.419 [2024-07-25 18:49:37.111869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507834 ] 00:16:25.419 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.419 [2024-07-25 18:49:37.173258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.419 [2024-07-25 18:49:37.259792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.984 Running I/O for 10 seconds... 
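With bdevperf attached and running, the harness first makes sure I/O is actually completing before it removes the host from the subsystem: the waitforio helper traced next polls bdevperf's iostat until Nvme0n1 reports at least 100 reads. A sketch of that loop (scripts/rpc.py standing in for the harness's rpc_cmd wrapper):

    # give it ~10 attempts, 0.25 s apart, for read completions on Nvme0n1 to reach 100
    i=10
    while (( i != 0 )); do
        read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.25
        (( i-- ))
    done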
00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:25.984 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.243 [2024-07-25 18:49:37.949752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.949993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.950005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.950017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.950029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.950042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de8980 is same with the state(5) to be set 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management 
-- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.243 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:26.243 [2024-07-25 18:49:37.959527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.243 [2024-07-25 18:49:37.959570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.959588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.243 [2024-07-25 18:49:37.959601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.959615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.243 [2024-07-25 18:49:37.959629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.959643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.243 [2024-07-25 18:49:37.959657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.959670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1895f00 is same with the state(5) to be set 00:16:26.243 [2024-07-25 18:49:37.959957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.243 [2024-07-25 18:49:37.959982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.960008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.243 [2024-07-25 18:49:37.960025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.960042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.243 [2024-07-25 18:49:37.960057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.960090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.243 [2024-07-25 18:49:37.960108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.960124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.243 [2024-07-25 18:49:37.960139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.960155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.243 [2024-07-25 18:49:37.960172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.243 [2024-07-25 18:49:37.960188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.960978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.960993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 18:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.244 [2024-07-25 18:49:37.961948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.961967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.961983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.962000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.962016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.244 [2024-07-25 18:49:37.962032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.244 [2024-07-25 18:49:37.962047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.245 [2024-07-25 18:49:37.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.245 [2024-07-25 18:49:37.962091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.245 18:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:26.245 [2024-07-25 18:49:37.962110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.245 [2024-07-25 18:49:37.962125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.245 [2024-07-25 18:49:37.962142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.245 [2024-07-25 18:49:37.962158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.245 [2024-07-25 18:49:37.962242] 
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1890330 was disconnected and freed. reset controller. 00:16:26.245 [2024-07-25 18:49:37.963403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:26.245 task offset: 81920 on job bdev=Nvme0n1 fails 00:16:26.245 00:16:26.245 Latency(us) 00:16:26.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.245 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:26.245 Job: Nvme0n1 ended in about 0.40 seconds with error 00:16:26.245 Verification LBA range: start 0x0 length 0x400 00:16:26.245 Nvme0n1 : 0.40 1583.40 98.96 158.34 0.00 35692.74 3070.48 33593.27 00:16:26.245 =================================================================================================================== 00:16:26.245 Total : 1583.40 98.96 158.34 0.00 35692.74 3070.48 33593.27 00:16:26.245 [2024-07-25 18:49:37.965256] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:26.245 [2024-07-25 18:49:37.965286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1895f00 (9): Bad file descriptor 00:16:26.245 [2024-07-25 18:49:38.016235] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:27.174 18:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3507834 00:16:27.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3507834) - No such process 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:27.175 { 00:16:27.175 "params": { 00:16:27.175 "name": "Nvme$subsystem", 00:16:27.175 "trtype": "$TEST_TRANSPORT", 00:16:27.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.175 "adrfam": "ipv4", 00:16:27.175 "trsvcid": "$NVMF_PORT", 00:16:27.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.175 "hdgst": ${hdgst:-false}, 00:16:27.175 "ddgst": ${ddgst:-false} 00:16:27.175 }, 00:16:27.175 "method": "bdev_nvme_attach_controller" 00:16:27.175 } 00:16:27.175 EOF 00:16:27.175 )") 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:27.175 18:49:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:27.175 "params": { 00:16:27.175 "name": "Nvme0", 00:16:27.175 "trtype": "tcp", 00:16:27.175 "traddr": "10.0.0.2", 00:16:27.175 "adrfam": "ipv4", 00:16:27.175 "trsvcid": "4420", 00:16:27.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:27.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:27.175 "hdgst": false, 00:16:27.175 "ddgst": false 00:16:27.175 }, 00:16:27.175 "method": "bdev_nvme_attach_controller" 00:16:27.175 }' 00:16:27.175 [2024-07-25 18:49:39.005561] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:27.175 [2024-07-25 18:49:39.005650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508111 ] 00:16:27.175 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.432 [2024-07-25 18:49:39.066182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.432 [2024-07-25 18:49:39.150995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.690 Running I/O for 1 seconds... 00:16:28.622 00:16:28.622 Latency(us) 00:16:28.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.622 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:28.622 Verification LBA range: start 0x0 length 0x400 00:16:28.622 Nvme0n1 : 1.03 1670.32 104.39 0.00 0.00 37701.54 7378.87 32622.36 00:16:28.622 =================================================================================================================== 00:16:28.622 Total : 1670.32 104.39 0.00 0.00 37701.54 7378.87 32622.36 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.880 rmmod nvme_tcp 00:16:28.880 rmmod nvme_fabrics 00:16:28.880 rmmod nvme_keyring 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- 
# '[' -n 3507764 ']' 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3507764 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3507764 ']' 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3507764 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3507764 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3507764' 00:16:28.880 killing process with pid 3507764 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3507764 00:16:28.880 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3507764 00:16:29.139 [2024-07-25 18:49:40.874005] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.139 18:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.672 18:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.672 18:49:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:31.672 00:16:31.672 real 0m8.469s 00:16:31.672 user 0m18.849s 00:16:31.672 sys 0m2.662s 00:16:31.672 18:49:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:31.672 18:49:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.672 ************************************ 00:16:31.672 END TEST nvmf_host_management 00:16:31.672 ************************************ 00:16:31.672 18:49:42 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:31.672 18:49:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:31.672 18:49:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:31.672 18:49:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:31.672 ************************************ 00:16:31.672 START TEST nvmf_lvol 00:16:31.672 ************************************ 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:31.672 * Looking for test storage... 00:16:31.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.672 18:49:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.575 18:49:45 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:33.575 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:33.576 00:16:33.576 --- 10.0.0.2 ping statistics --- 00:16:33.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.576 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:16:33.576 00:16:33.576 --- 10.0.0.1 ping statistics --- 00:16:33.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.576 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3510194 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3510194 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3510194 ']' 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:33.576 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:33.576 [2024-07-25 18:49:45.309866] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:33.576 [2024-07-25 18:49:45.309939] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.576 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.576 [2024-07-25 18:49:45.376109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.834 [2024-07-25 18:49:45.466413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.834 [2024-07-25 18:49:45.466470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:33.834 [2024-07-25 18:49:45.466495] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.834 [2024-07-25 18:49:45.466509] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.834 [2024-07-25 18:49:45.466521] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.834 [2024-07-25 18:49:45.466584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.834 [2024-07-25 18:49:45.466652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.834 [2024-07-25 18:49:45.466654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.834 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:33.834 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:33.834 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.834 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.834 18:49:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:33.834 18:49:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.834 18:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:34.092 [2024-07-25 18:49:45.825993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.092 18:49:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.350 18:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:34.350 18:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.607 18:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:34.607 18:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:34.865 18:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:35.123 18:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=99731347-b80e-4f07-9ba4-160aab1f66b9 00:16:35.123 18:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99731347-b80e-4f07-9ba4-160aab1f66b9 lvol 20 00:16:35.381 18:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c3ac3e28-88f8-4d72-a218-97f93ce21c34 00:16:35.381 18:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:35.639 18:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3ac3e28-88f8-4d72-a218-97f93ce21c34 00:16:35.897 18:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:36.154 [2024-07-25 18:49:47.884221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.154 18:49:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:36.412 18:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3510614 00:16:36.412 18:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:36.412 18:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:36.412 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.354 18:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c3ac3e28-88f8-4d72-a218-97f93ce21c34 MY_SNAPSHOT 00:16:37.917 18:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c4fef2f0-e9fa-4a97-8e15-6b1e2b7b7c6d 00:16:37.917 18:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c3ac3e28-88f8-4d72-a218-97f93ce21c34 30 00:16:38.174 18:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c4fef2f0-e9fa-4a97-8e15-6b1e2b7b7c6d MY_CLONE 00:16:38.431 18:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cb121667-ea25-449f-b407-037e11263eb1 00:16:38.431 18:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cb121667-ea25-449f-b407-037e11263eb1 00:16:38.996 18:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3510614 00:16:47.128 Initializing NVMe Controllers 00:16:47.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:47.128 Controller IO queue size 128, less than required. 00:16:47.128 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:47.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:47.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:47.128 Initialization complete. Launching workers. 
00:16:47.128 ======================================================== 00:16:47.128 Latency(us) 00:16:47.128 Device Information : IOPS MiB/s Average min max 00:16:47.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10245.90 40.02 12492.82 1793.30 78856.47 00:16:47.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10488.80 40.97 12207.39 2289.66 155063.85 00:16:47.128 ======================================================== 00:16:47.128 Total : 20734.70 80.99 12348.43 1793.30 155063.85 00:16:47.128 00:16:47.128 18:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:47.128 18:49:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c3ac3e28-88f8-4d72-a218-97f93ce21c34 00:16:47.386 18:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99731347-b80e-4f07-9ba4-160aab1f66b9 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.644 rmmod nvme_tcp 00:16:47.644 rmmod nvme_fabrics 00:16:47.644 rmmod nvme_keyring 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3510194 ']' 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3510194 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3510194 ']' 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3510194 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3510194 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3510194' 00:16:47.644 killing process with pid 3510194 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3510194 00:16:47.644 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3510194 00:16:47.903 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.903 
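Teardown mirrors the setup in reverse: the subsystem is removed first so nothing still references the namespace, then the lvol and its lvstore are deleted, and nvmftestfini finally unloads nvme-tcp/nvme-fabrics and kills the target process. The RPC portion, in outline (same illustrative variables as above):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"              # delete the volume before its store
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"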
18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.903 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.903 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.903 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.903 18:49:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.903 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.903 18:49:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.435 18:50:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.435 00:16:50.436 real 0m18.795s 00:16:50.436 user 1m3.836s 00:16:50.436 sys 0m5.790s 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:50.436 ************************************ 00:16:50.436 END TEST nvmf_lvol 00:16:50.436 ************************************ 00:16:50.436 18:50:01 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:50.436 18:50:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:50.436 18:50:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:50.436 18:50:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.436 ************************************ 00:16:50.436 START TEST nvmf_lvs_grow 00:16:50.436 ************************************ 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:50.436 * Looking for test storage... 
00:16:50.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.436 18:50:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:52.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:52.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.333 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:52.334 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:52.334 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:16:52.334 00:16:52.334 --- 10.0.0.2 ping statistics --- 00:16:52.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.334 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:16:52.334 00:16:52.334 --- 10.0.0.1 ping statistics --- 00:16:52.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.334 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3513865 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3513865 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3513865 ']' 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.334 18:50:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.334 [2024-07-25 18:50:03.983836] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:52.334 [2024-07-25 18:50:03.983930] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.334 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.334 [2024-07-25 18:50:04.052430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.334 [2024-07-25 18:50:04.141754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.334 [2024-07-25 18:50:04.141801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
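The nvmftestinit sequence above splits the two ice ports across network namespaces so initiator and target traffic crosses a real link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1, and the target is then launched inside the namespace. Reconstructed from the commands in the trace (interface names are whatever this host's ice NICs were enumerated as; the nvmf_tgt path is shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP in on the initiator port
  ping -c 1 10.0.0.2                                              # reachability check before the target starts
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # nvmf_tgt pinned to core 0 inside the namespace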
00:16:52.334 [2024-07-25 18:50:04.141821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.334 [2024-07-25 18:50:04.141832] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.334 [2024-07-25 18:50:04.141842] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.334 [2024-07-25 18:50:04.141869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.591 18:50:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.591 18:50:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:52.591 18:50:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.591 18:50:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.591 18:50:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.591 18:50:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.591 18:50:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:52.885 [2024-07-25 18:50:04.508937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.885 ************************************ 00:16:52.885 START TEST lvs_grow_clean 00:16:52.885 ************************************ 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.885 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:53.142 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:53.142 18:50:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:53.399 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f469680a-1637-4146-8434-8cd9857e5476 00:16:53.399 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:16:53.399 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:53.657 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:53.657 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:53.657 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f469680a-1637-4146-8434-8cd9857e5476 lvol 150 00:16:53.914 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b722051-2950-4719-b507-69bf7081aaa8 00:16:53.914 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:53.914 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:54.172 [2024-07-25 18:50:05.897449] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:54.172 [2024-07-25 18:50:05.897539] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:54.172 true 00:16:54.172 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:16:54.172 18:50:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:54.429 18:50:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:54.429 18:50:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:54.687 18:50:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b722051-2950-4719-b507-69bf7081aaa8 00:16:54.945 18:50:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:55.202 [2024-07-25 18:50:06.932627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.202 18:50:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3514305 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3514305 /var/tmp/bdevperf.sock 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3514305 ']' 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:55.460 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:55.460 [2024-07-25 18:50:07.250228] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
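Here the lvs_grow tests switch the initiator from spdk_nvme_perf to bdevperf: bdevperf is started idle (-z) with its own RPC socket, the exported namespace is attached as an NVMe bdev over TCP, and the queued randwrite job is only started via perform_tests once Nvme0n1 is visible. The initiator-side calls, in outline (socket path, names and workload parameters as in the trace):

  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # wait up to 3 s for the attached bdev
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                  # kick off the queued workload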
00:16:55.460 [2024-07-25 18:50:07.250300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514305 ] 00:16:55.460 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.460 [2024-07-25 18:50:07.310485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.718 [2024-07-25 18:50:07.400867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.718 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:55.718 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:55.718 18:50:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:56.283 Nvme0n1 00:16:56.283 18:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:56.540 [ 00:16:56.540 { 00:16:56.540 "name": "Nvme0n1", 00:16:56.540 "aliases": [ 00:16:56.540 "3b722051-2950-4719-b507-69bf7081aaa8" 00:16:56.540 ], 00:16:56.540 "product_name": "NVMe disk", 00:16:56.540 "block_size": 4096, 00:16:56.540 "num_blocks": 38912, 00:16:56.540 "uuid": "3b722051-2950-4719-b507-69bf7081aaa8", 00:16:56.540 "assigned_rate_limits": { 00:16:56.540 "rw_ios_per_sec": 0, 00:16:56.540 "rw_mbytes_per_sec": 0, 00:16:56.540 "r_mbytes_per_sec": 0, 00:16:56.540 "w_mbytes_per_sec": 0 00:16:56.540 }, 00:16:56.540 "claimed": false, 00:16:56.540 "zoned": false, 00:16:56.540 "supported_io_types": { 00:16:56.540 "read": true, 00:16:56.540 "write": true, 00:16:56.540 "unmap": true, 00:16:56.540 "write_zeroes": true, 00:16:56.540 "flush": true, 00:16:56.540 "reset": true, 00:16:56.540 "compare": true, 00:16:56.540 "compare_and_write": true, 00:16:56.540 "abort": true, 00:16:56.540 "nvme_admin": true, 00:16:56.540 "nvme_io": true 00:16:56.540 }, 00:16:56.540 "memory_domains": [ 00:16:56.540 { 00:16:56.540 "dma_device_id": "system", 00:16:56.540 "dma_device_type": 1 00:16:56.540 } 00:16:56.540 ], 00:16:56.540 "driver_specific": { 00:16:56.540 "nvme": [ 00:16:56.540 { 00:16:56.540 "trid": { 00:16:56.540 "trtype": "TCP", 00:16:56.540 "adrfam": "IPv4", 00:16:56.540 "traddr": "10.0.0.2", 00:16:56.540 "trsvcid": "4420", 00:16:56.540 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:56.540 }, 00:16:56.540 "ctrlr_data": { 00:16:56.540 "cntlid": 1, 00:16:56.540 "vendor_id": "0x8086", 00:16:56.540 "model_number": "SPDK bdev Controller", 00:16:56.540 "serial_number": "SPDK0", 00:16:56.540 "firmware_revision": "24.05.1", 00:16:56.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:56.540 "oacs": { 00:16:56.540 "security": 0, 00:16:56.540 "format": 0, 00:16:56.540 "firmware": 0, 00:16:56.540 "ns_manage": 0 00:16:56.540 }, 00:16:56.540 "multi_ctrlr": true, 00:16:56.540 "ana_reporting": false 00:16:56.540 }, 00:16:56.540 "vs": { 00:16:56.540 "nvme_version": "1.3" 00:16:56.540 }, 00:16:56.540 "ns_data": { 00:16:56.540 "id": 1, 00:16:56.540 "can_share": true 00:16:56.540 } 00:16:56.540 } 00:16:56.540 ], 00:16:56.540 "mp_policy": "active_passive" 00:16:56.540 } 00:16:56.540 } 00:16:56.540 ] 00:16:56.540 18:50:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3514440 00:16:56.540 18:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.540 18:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:56.540 Running I/O for 10 seconds... 00:16:57.912 Latency(us) 00:16:57.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.912 Nvme0n1 : 1.00 14352.00 56.06 0.00 0.00 0.00 0.00 0.00 00:16:57.912 =================================================================================================================== 00:16:57.912 Total : 14352.00 56.06 0.00 0.00 0.00 0.00 0.00 00:16:57.912 00:16:58.478 18:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f469680a-1637-4146-8434-8cd9857e5476 00:16:58.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.737 Nvme0n1 : 2.00 14765.00 57.68 0.00 0.00 0.00 0.00 0.00 00:16:58.737 =================================================================================================================== 00:16:58.737 Total : 14765.00 57.68 0.00 0.00 0.00 0.00 0.00 00:16:58.737 00:16:58.737 true 00:16:58.737 18:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:16:58.737 18:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:58.995 18:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:58.995 18:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:58.995 18:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3514440 00:16:59.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.561 Nvme0n1 : 3.00 14838.67 57.96 0.00 0.00 0.00 0.00 0.00 00:16:59.561 =================================================================================================================== 00:16:59.561 Total : 14838.67 57.96 0.00 0.00 0.00 0.00 0.00 00:16:59.561 00:17:00.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.936 Nvme0n1 : 4.00 14970.75 58.48 0.00 0.00 0.00 0.00 0.00 00:17:00.936 =================================================================================================================== 00:17:00.936 Total : 14970.75 58.48 0.00 0.00 0.00 0.00 0.00 00:17:00.936 00:17:01.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.871 Nvme0n1 : 5.00 15100.80 58.99 0.00 0.00 0.00 0.00 0.00 00:17:01.871 =================================================================================================================== 00:17:01.871 Total : 15100.80 58.99 0.00 0.00 0.00 0.00 0.00 00:17:01.871 00:17:02.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.806 Nvme0n1 : 6.00 15102.83 59.00 0.00 0.00 0.00 0.00 0.00 00:17:02.806 
=================================================================================================================== 00:17:02.806 Total : 15102.83 59.00 0.00 0.00 0.00 0.00 0.00 00:17:02.806 00:17:03.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.741 Nvme0n1 : 7.00 15068.00 58.86 0.00 0.00 0.00 0.00 0.00 00:17:03.741 =================================================================================================================== 00:17:03.741 Total : 15068.00 58.86 0.00 0.00 0.00 0.00 0.00 00:17:03.741 00:17:04.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.713 Nvme0n1 : 8.00 15073.62 58.88 0.00 0.00 0.00 0.00 0.00 00:17:04.713 =================================================================================================================== 00:17:04.713 Total : 15073.62 58.88 0.00 0.00 0.00 0.00 0.00 00:17:04.713 00:17:05.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.648 Nvme0n1 : 9.00 15078.00 58.90 0.00 0.00 0.00 0.00 0.00 00:17:05.648 =================================================================================================================== 00:17:05.648 Total : 15078.00 58.90 0.00 0.00 0.00 0.00 0.00 00:17:05.648 00:17:06.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.582 Nvme0n1 : 10.00 15075.60 58.89 0.00 0.00 0.00 0.00 0.00 00:17:06.582 =================================================================================================================== 00:17:06.582 Total : 15075.60 58.89 0.00 0.00 0.00 0.00 0.00 00:17:06.582 00:17:06.582 00:17:06.582 Latency(us) 00:17:06.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.582 Nvme0n1 : 10.01 15076.78 58.89 0.00 0.00 8485.23 5170.06 18932.62 00:17:06.582 =================================================================================================================== 00:17:06.582 Total : 15076.78 58.89 0.00 0.00 8485.23 5170.06 18932.62 00:17:06.582 0 00:17:06.582 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3514305 00:17:06.582 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3514305 ']' 00:17:06.582 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3514305 00:17:06.582 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:06.582 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:06.582 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3514305 00:17:06.841 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:06.841 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:06.841 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3514305' 00:17:06.841 killing process with pid 3514305 00:17:06.841 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3514305 00:17:06.841 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.841 00:17:06.841 Latency(us) 00:17:06.841 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:06.841 =================================================================================================================== 00:17:06.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.841 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3514305 00:17:06.841 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:07.406 18:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:07.664 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:17:07.664 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:07.664 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:07.664 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:07.664 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:07.923 [2024-07-25 18:50:19.766862] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:08.181 18:50:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:17:08.181 request: 00:17:08.181 { 00:17:08.181 "uuid": "f469680a-1637-4146-8434-8cd9857e5476", 00:17:08.181 "method": "bdev_lvol_get_lvstores", 00:17:08.181 "req_id": 1 00:17:08.181 } 00:17:08.181 Got JSON-RPC error response 00:17:08.181 response: 00:17:08.181 { 00:17:08.181 "code": -19, 00:17:08.181 "message": "No such device" 00:17:08.181 } 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:08.439 aio_bdev 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b722051-2950-4719-b507-69bf7081aaa8 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=3b722051-2950-4719-b507-69bf7081aaa8 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:08.439 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:08.697 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b722051-2950-4719-b507-69bf7081aaa8 -t 2000 00:17:08.956 [ 00:17:08.956 { 00:17:08.956 "name": "3b722051-2950-4719-b507-69bf7081aaa8", 00:17:08.956 "aliases": [ 00:17:08.956 "lvs/lvol" 00:17:08.956 ], 00:17:08.956 "product_name": "Logical Volume", 00:17:08.956 "block_size": 4096, 00:17:08.956 "num_blocks": 38912, 00:17:08.956 "uuid": "3b722051-2950-4719-b507-69bf7081aaa8", 00:17:08.956 "assigned_rate_limits": { 00:17:08.956 "rw_ios_per_sec": 0, 00:17:08.956 "rw_mbytes_per_sec": 0, 00:17:08.956 "r_mbytes_per_sec": 0, 00:17:08.956 "w_mbytes_per_sec": 0 00:17:08.956 }, 00:17:08.956 "claimed": false, 00:17:08.956 "zoned": false, 00:17:08.956 "supported_io_types": { 00:17:08.956 "read": true, 00:17:08.956 "write": true, 00:17:08.956 "unmap": true, 00:17:08.956 "write_zeroes": true, 00:17:08.956 "flush": false, 00:17:08.956 "reset": true, 00:17:08.956 "compare": false, 00:17:08.956 "compare_and_write": false, 00:17:08.956 "abort": false, 00:17:08.956 "nvme_admin": false, 00:17:08.956 "nvme_io": false 00:17:08.956 }, 00:17:08.956 "driver_specific": { 00:17:08.956 "lvol": { 00:17:08.956 "lvol_store_uuid": "f469680a-1637-4146-8434-8cd9857e5476", 00:17:08.956 "base_bdev": "aio_bdev", 
00:17:08.956 "thin_provision": false, 00:17:08.956 "num_allocated_clusters": 38, 00:17:08.956 "snapshot": false, 00:17:08.956 "clone": false, 00:17:08.956 "esnap_clone": false 00:17:08.956 } 00:17:08.956 } 00:17:08.956 } 00:17:08.956 ] 00:17:08.956 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:08.956 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:17:08.956 18:50:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:09.214 18:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:09.214 18:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f469680a-1637-4146-8434-8cd9857e5476 00:17:09.214 18:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:09.471 18:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:09.471 18:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b722051-2950-4719-b507-69bf7081aaa8 00:17:10.038 18:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f469680a-1637-4146-8434-8cd9857e5476 00:17:10.038 18:50:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:10.296 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.296 00:17:10.296 real 0m17.571s 00:17:10.296 user 0m16.816s 00:17:10.296 sys 0m2.002s 00:17:10.296 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:10.296 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:10.296 ************************************ 00:17:10.296 END TEST lvs_grow_clean 00:17:10.296 ************************************ 00:17:10.296 18:50:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:10.296 18:50:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:10.296 18:50:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:10.296 18:50:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:10.554 ************************************ 00:17:10.554 START TEST lvs_grow_dirty 00:17:10.554 ************************************ 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.554 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.812 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:10.812 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:11.070 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:11.070 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:11.070 18:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:11.328 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:11.328 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:11.328 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 759e692e-cb3a-4065-93f8-0eeb01682d7e lvol 150 00:17:11.587 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e871dae5-5b42-45e0-a98f-6f4ed675e4c9 00:17:11.587 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:11.587 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:11.845 [2024-07-25 18:50:23.587424] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:11.845 [2024-07-25 18:50:23.587529] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:11.845 true 00:17:11.845 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:11.845 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:12.103 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:12.103 18:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:12.361 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e871dae5-5b42-45e0-a98f-6f4ed675e4c9 00:17:12.618 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:12.876 [2024-07-25 18:50:24.566365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.876 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3516352 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3516352 /var/tmp/bdevperf.sock 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3516352 ']' 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:13.133 18:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:13.133 [2024-07-25 18:50:24.873160] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:13.133 [2024-07-25 18:50:24.873246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3516352 ] 00:17:13.133 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.133 [2024-07-25 18:50:24.939552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.390 [2024-07-25 18:50:25.030653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.390 18:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:13.390 18:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:13.390 18:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:13.647 Nvme0n1 00:17:13.904 18:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:13.904 [ 00:17:13.904 { 00:17:13.904 "name": "Nvme0n1", 00:17:13.904 "aliases": [ 00:17:13.904 "e871dae5-5b42-45e0-a98f-6f4ed675e4c9" 00:17:13.904 ], 00:17:13.904 "product_name": "NVMe disk", 00:17:13.904 "block_size": 4096, 00:17:13.904 "num_blocks": 38912, 00:17:13.904 "uuid": "e871dae5-5b42-45e0-a98f-6f4ed675e4c9", 00:17:13.904 "assigned_rate_limits": { 00:17:13.904 "rw_ios_per_sec": 0, 00:17:13.904 "rw_mbytes_per_sec": 0, 00:17:13.904 "r_mbytes_per_sec": 0, 00:17:13.904 "w_mbytes_per_sec": 0 00:17:13.904 }, 00:17:13.904 "claimed": false, 00:17:13.904 "zoned": false, 00:17:13.904 "supported_io_types": { 00:17:13.904 "read": true, 00:17:13.904 "write": true, 00:17:13.904 "unmap": true, 00:17:13.904 "write_zeroes": true, 00:17:13.904 "flush": true, 00:17:13.904 "reset": true, 00:17:13.904 "compare": true, 00:17:13.904 "compare_and_write": true, 00:17:13.904 "abort": true, 00:17:13.904 "nvme_admin": true, 00:17:13.904 "nvme_io": true 00:17:13.904 }, 00:17:13.904 "memory_domains": [ 00:17:13.904 { 00:17:13.904 "dma_device_id": "system", 00:17:13.904 "dma_device_type": 1 00:17:13.904 } 00:17:13.904 ], 00:17:13.904 "driver_specific": { 00:17:13.904 "nvme": [ 00:17:13.904 { 00:17:13.904 "trid": { 00:17:13.904 "trtype": "TCP", 00:17:13.904 "adrfam": "IPv4", 00:17:13.904 "traddr": "10.0.0.2", 00:17:13.904 "trsvcid": "4420", 00:17:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:13.904 }, 00:17:13.904 "ctrlr_data": { 00:17:13.904 "cntlid": 1, 00:17:13.904 "vendor_id": "0x8086", 00:17:13.904 "model_number": "SPDK bdev Controller", 00:17:13.904 "serial_number": "SPDK0", 00:17:13.904 "firmware_revision": "24.05.1", 00:17:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:13.904 "oacs": { 00:17:13.904 "security": 0, 00:17:13.904 "format": 0, 00:17:13.904 "firmware": 0, 00:17:13.904 "ns_manage": 0 00:17:13.904 }, 00:17:13.904 "multi_ctrlr": true, 00:17:13.904 "ana_reporting": false 00:17:13.905 }, 00:17:13.905 "vs": { 00:17:13.905 "nvme_version": "1.3" 00:17:13.905 }, 00:17:13.905 "ns_data": { 00:17:13.905 "id": 1, 00:17:13.905 "can_share": true 00:17:13.905 } 00:17:13.905 } 00:17:13.905 ], 00:17:13.905 "mp_policy": "active_passive" 00:17:13.905 } 00:17:13.905 } 00:17:13.905 ] 00:17:14.162 18:50:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3516487 00:17:14.162 18:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:14.162 18:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:14.162 Running I/O for 10 seconds... 00:17:15.097 Latency(us) 00:17:15.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.097 Nvme0n1 : 1.00 14352.00 56.06 0.00 0.00 0.00 0.00 0.00 00:17:15.097 =================================================================================================================== 00:17:15.097 Total : 14352.00 56.06 0.00 0.00 0.00 0.00 0.00 00:17:15.097 00:17:16.031 18:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:16.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.031 Nvme0n1 : 2.00 14830.00 57.93 0.00 0.00 0.00 0.00 0.00 00:17:16.031 =================================================================================================================== 00:17:16.031 Total : 14830.00 57.93 0.00 0.00 0.00 0.00 0.00 00:17:16.031 00:17:16.290 true 00:17:16.290 18:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:16.290 18:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:16.548 18:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:16.548 18:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:16.548 18:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3516487 00:17:17.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.111 Nvme0n1 : 3.00 14883.00 58.14 0.00 0.00 0.00 0.00 0.00 00:17:17.111 =================================================================================================================== 00:17:17.111 Total : 14883.00 58.14 0.00 0.00 0.00 0.00 0.00 00:17:17.111 00:17:18.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.043 Nvme0n1 : 4.00 15011.00 58.64 0.00 0.00 0.00 0.00 0.00 00:17:18.043 =================================================================================================================== 00:17:18.043 Total : 15011.00 58.64 0.00 0.00 0.00 0.00 0.00 00:17:18.043 00:17:19.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.417 Nvme0n1 : 5.00 15107.60 59.01 0.00 0.00 0.00 0.00 0.00 00:17:19.417 =================================================================================================================== 00:17:19.417 Total : 15107.60 59.01 0.00 0.00 0.00 0.00 0.00 00:17:19.417 00:17:20.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.349 Nvme0n1 : 6.00 15119.33 59.06 0.00 0.00 0.00 0.00 0.00 00:17:20.349 
=================================================================================================================== 00:17:20.349 Total : 15119.33 59.06 0.00 0.00 0.00 0.00 0.00 00:17:20.349 00:17:21.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.288 Nvme0n1 : 7.00 15100.29 58.99 0.00 0.00 0.00 0.00 0.00 00:17:21.288 =================================================================================================================== 00:17:21.288 Total : 15100.29 58.99 0.00 0.00 0.00 0.00 0.00 00:17:21.288 00:17:22.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.218 Nvme0n1 : 8.00 15125.88 59.09 0.00 0.00 0.00 0.00 0.00 00:17:22.218 =================================================================================================================== 00:17:22.218 Total : 15125.88 59.09 0.00 0.00 0.00 0.00 0.00 00:17:22.218 00:17:23.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.152 Nvme0n1 : 9.00 15153.00 59.19 0.00 0.00 0.00 0.00 0.00 00:17:23.152 =================================================================================================================== 00:17:23.152 Total : 15153.00 59.19 0.00 0.00 0.00 0.00 0.00 00:17:23.152 00:17:24.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.084 Nvme0n1 : 10.00 15212.50 59.42 0.00 0.00 0.00 0.00 0.00 00:17:24.084 =================================================================================================================== 00:17:24.084 Total : 15212.50 59.42 0.00 0.00 0.00 0.00 0.00 00:17:24.084 00:17:24.084 00:17:24.084 Latency(us) 00:17:24.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.084 Nvme0n1 : 10.01 15213.24 59.43 0.00 0.00 8408.92 4903.06 15728.64 00:17:24.084 =================================================================================================================== 00:17:24.084 Total : 15213.24 59.43 0.00 0.00 8408.92 4903.06 15728.64 00:17:24.084 0 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3516352 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3516352 ']' 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3516352 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3516352 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3516352' 00:17:24.084 killing process with pid 3516352 00:17:24.084 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3516352 00:17:24.084 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.084 00:17:24.084 Latency(us) 00:17:24.085 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:24.085 =================================================================================================================== 00:17:24.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.085 18:50:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3516352 00:17:24.341 18:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:24.628 18:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:25.193 18:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:25.193 18:50:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3513865 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3513865 00:17:25.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3513865 Killed "${NVMF_APP[@]}" "$@" 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:25.193 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3517813 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3517813 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3517813 ']' 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
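The run above exercises the lvol-store grow path end to end: a file-backed AIO bdev is created, a lvol store and a 150 MiB lvol are layered on top, the backing file is grown from 200 MiB to 400 MiB, the AIO bdev is rescanned, and bdev_lvol_grow_lvstore claims the added space (total_data_clusters goes from 49 to 99). A minimal sketch of that RPC sequence follows; the rpc.py path, file name, and variable names are placeholders rather than the exact values used by the harness.

    rpc=./scripts/rpc.py
    aio_file=/tmp/aio_bdev_file

    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

    # Grow the backing file, let the AIO bdev pick up the new size,
    # then extend the lvol store into the new clusters.
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"

    # total_data_clusters should roughly double (49 -> 99 in the run above).
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'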
00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:25.450 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:25.450 [2024-07-25 18:50:37.114321] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:25.450 [2024-07-25 18:50:37.114402] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.450 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.450 [2024-07-25 18:50:37.178724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.450 [2024-07-25 18:50:37.270046] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.450 [2024-07-25 18:50:37.270139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.450 [2024-07-25 18:50:37.270155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.450 [2024-07-25 18:50:37.270166] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.450 [2024-07-25 18:50:37.270176] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.450 [2024-07-25 18:50:37.270226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.707 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:25.707 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:25.707 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.707 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.707 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:25.707 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.707 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:25.964 [2024-07-25 18:50:37.682789] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:25.964 [2024-07-25 18:50:37.682935] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:25.964 [2024-07-25 18:50:37.682981] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e871dae5-5b42-45e0-a98f-6f4ed675e4c9 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=e871dae5-5b42-45e0-a98f-6f4ed675e4c9 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:25.964 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:26.221 18:50:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e871dae5-5b42-45e0-a98f-6f4ed675e4c9 -t 2000 00:17:26.479 [ 00:17:26.479 { 00:17:26.479 "name": "e871dae5-5b42-45e0-a98f-6f4ed675e4c9", 00:17:26.479 "aliases": [ 00:17:26.479 "lvs/lvol" 00:17:26.479 ], 00:17:26.479 "product_name": "Logical Volume", 00:17:26.479 "block_size": 4096, 00:17:26.479 "num_blocks": 38912, 00:17:26.479 "uuid": "e871dae5-5b42-45e0-a98f-6f4ed675e4c9", 00:17:26.479 "assigned_rate_limits": { 00:17:26.479 "rw_ios_per_sec": 0, 00:17:26.479 "rw_mbytes_per_sec": 0, 00:17:26.479 "r_mbytes_per_sec": 0, 00:17:26.479 "w_mbytes_per_sec": 0 00:17:26.479 }, 00:17:26.479 "claimed": false, 00:17:26.479 "zoned": false, 00:17:26.479 "supported_io_types": { 00:17:26.479 "read": true, 00:17:26.479 "write": true, 00:17:26.479 "unmap": true, 00:17:26.479 "write_zeroes": true, 00:17:26.479 "flush": false, 00:17:26.479 "reset": true, 00:17:26.479 "compare": false, 00:17:26.479 "compare_and_write": false, 00:17:26.479 "abort": false, 00:17:26.479 "nvme_admin": false, 00:17:26.479 "nvme_io": false 00:17:26.479 }, 00:17:26.479 "driver_specific": { 00:17:26.479 "lvol": { 00:17:26.479 "lvol_store_uuid": "759e692e-cb3a-4065-93f8-0eeb01682d7e", 00:17:26.479 "base_bdev": "aio_bdev", 00:17:26.479 "thin_provision": false, 00:17:26.479 "num_allocated_clusters": 38, 00:17:26.479 "snapshot": false, 00:17:26.479 "clone": false, 00:17:26.479 "esnap_clone": false 00:17:26.479 } 00:17:26.479 } 00:17:26.479 } 00:17:26.479 ] 00:17:26.479 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:26.479 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:26.479 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:26.736 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:26.736 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:26.736 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:26.993 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:26.993 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:27.251 [2024-07-25 18:50:38.955789] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.251 18:50:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.251 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:27.251 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:27.509 request: 00:17:27.509 { 00:17:27.509 "uuid": "759e692e-cb3a-4065-93f8-0eeb01682d7e", 00:17:27.509 "method": "bdev_lvol_get_lvstores", 00:17:27.509 "req_id": 1 00:17:27.509 } 00:17:27.509 Got JSON-RPC error response 00:17:27.509 response: 00:17:27.509 { 00:17:27.509 "code": -19, 00:17:27.509 "message": "No such device" 00:17:27.509 } 00:17:27.509 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:27.509 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.509 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:27.509 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.509 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.767 aio_bdev 00:17:27.767 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e871dae5-5b42-45e0-a98f-6f4ed675e4c9 00:17:27.767 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=e871dae5-5b42-45e0-a98f-6f4ed675e4c9 00:17:27.767 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:27.767 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:27.767 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
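Deleting the AIO bdev underneath a live lvol store hot-removes the store with it, which is why the bdev_lvol_get_lvstores call above comes back with JSON-RPC error -19 (No such device); the harness asserts that by running the query under its NOT wrapper. A rough equivalent of that negative check, with placeholder variable names, is:

    # $rpc and $lvs are placeholders; the point is asserting the expected failure.
    if $rpc bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "lvstore still visible after its base bdev was deleted" >&2
        exit 1
    fi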
00:17:27.767 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:27.767 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:28.025 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e871dae5-5b42-45e0-a98f-6f4ed675e4c9 -t 2000 00:17:28.283 [ 00:17:28.283 { 00:17:28.283 "name": "e871dae5-5b42-45e0-a98f-6f4ed675e4c9", 00:17:28.283 "aliases": [ 00:17:28.283 "lvs/lvol" 00:17:28.283 ], 00:17:28.283 "product_name": "Logical Volume", 00:17:28.283 "block_size": 4096, 00:17:28.283 "num_blocks": 38912, 00:17:28.283 "uuid": "e871dae5-5b42-45e0-a98f-6f4ed675e4c9", 00:17:28.283 "assigned_rate_limits": { 00:17:28.283 "rw_ios_per_sec": 0, 00:17:28.283 "rw_mbytes_per_sec": 0, 00:17:28.283 "r_mbytes_per_sec": 0, 00:17:28.283 "w_mbytes_per_sec": 0 00:17:28.283 }, 00:17:28.283 "claimed": false, 00:17:28.283 "zoned": false, 00:17:28.283 "supported_io_types": { 00:17:28.283 "read": true, 00:17:28.283 "write": true, 00:17:28.283 "unmap": true, 00:17:28.283 "write_zeroes": true, 00:17:28.283 "flush": false, 00:17:28.283 "reset": true, 00:17:28.283 "compare": false, 00:17:28.283 "compare_and_write": false, 00:17:28.283 "abort": false, 00:17:28.283 "nvme_admin": false, 00:17:28.283 "nvme_io": false 00:17:28.283 }, 00:17:28.283 "driver_specific": { 00:17:28.283 "lvol": { 00:17:28.283 "lvol_store_uuid": "759e692e-cb3a-4065-93f8-0eeb01682d7e", 00:17:28.283 "base_bdev": "aio_bdev", 00:17:28.283 "thin_provision": false, 00:17:28.283 "num_allocated_clusters": 38, 00:17:28.283 "snapshot": false, 00:17:28.283 "clone": false, 00:17:28.283 "esnap_clone": false 00:17:28.283 } 00:17:28.283 } 00:17:28.283 } 00:17:28.283 ] 00:17:28.283 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:28.283 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:28.283 18:50:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:28.540 18:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:28.540 18:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:28.540 18:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:28.796 18:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:28.796 18:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e871dae5-5b42-45e0-a98f-6f4ed675e4c9 00:17:29.054 18:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 759e692e-cb3a-4065-93f8-0eeb01682d7e 00:17:29.312 18:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:29.572 00:17:29.572 real 0m19.167s 00:17:29.572 user 0m48.352s 00:17:29.572 sys 0m4.797s 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:29.572 ************************************ 00:17:29.572 END TEST lvs_grow_dirty 00:17:29.572 ************************************ 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:29.572 nvmf_trace.0 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.572 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.572 rmmod nvme_tcp 00:17:29.572 rmmod nvme_fabrics 00:17:29.831 rmmod nvme_keyring 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3517813 ']' 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3517813 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3517813 ']' 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3517813 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3517813 00:17:29.831 18:50:41 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3517813' 00:17:29.831 killing process with pid 3517813 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3517813 00:17:29.831 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3517813 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.090 18:50:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.033 18:50:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.033 00:17:32.033 real 0m41.908s 00:17:32.033 user 1m10.818s 00:17:32.033 sys 0m8.579s 00:17:32.033 18:50:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.033 18:50:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:32.033 ************************************ 00:17:32.033 END TEST nvmf_lvs_grow 00:17:32.033 ************************************ 00:17:32.033 18:50:43 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:32.033 18:50:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:32.033 18:50:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.033 18:50:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:32.033 ************************************ 00:17:32.033 START TEST nvmf_bdev_io_wait 00:17:32.033 ************************************ 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:32.034 * Looking for test storage... 
00:17:32.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.034 18:50:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:34.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:34.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:34.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:34.561 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:34.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:34.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:17:34.561 00:17:34.561 --- 10.0.0.2 ping statistics --- 00:17:34.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.561 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:17:34.561 00:17:34.561 --- 10.0.0.1 ping statistics --- 00:17:34.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.561 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.561 18:50:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3520327 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3520327 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3520327 ']' 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 [2024-07-25 18:50:46.060771] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
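The probes above confirm the two-namespace topology the TCP tests depend on: the target-side interface (cvl_0_0, 10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace while the initiator-side interface (cvl_0_1, 10.0.0.1) stays in the default namespace, with an iptables rule accepting NVMe/TCP traffic on port 4420. A condensed sketch of that wiring, simplified from the commands traced above, is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator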
00:17:34.561 [2024-07-25 18:50:46.060868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.561 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.561 [2024-07-25 18:50:46.134625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.561 [2024-07-25 18:50:46.230904] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.561 [2024-07-25 18:50:46.230967] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.561 [2024-07-25 18:50:46.230983] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.561 [2024-07-25 18:50:46.231010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.561 [2024-07-25 18:50:46.231021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.561 [2024-07-25 18:50:46.235086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.561 [2024-07-25 18:50:46.235136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.561 [2024-07-25 18:50:46.235221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.561 [2024-07-25 18:50:46.235225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 [2024-07-25 18:50:46.391713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.561 18:50:46 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 Malloc0 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.561 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:34.819 [2024-07-25 18:50:46.450453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3520358 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3520360 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:34.819 { 00:17:34.819 "params": { 00:17:34.819 "name": "Nvme$subsystem", 00:17:34.819 "trtype": "$TEST_TRANSPORT", 00:17:34.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.819 "adrfam": "ipv4", 00:17:34.819 "trsvcid": "$NVMF_PORT", 00:17:34.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.819 "hdgst": ${hdgst:-false}, 00:17:34.819 "ddgst": ${ddgst:-false} 00:17:34.819 }, 00:17:34.819 "method": "bdev_nvme_attach_controller" 00:17:34.819 } 00:17:34.819 EOF 00:17:34.819 )") 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3520362 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:34.819 { 00:17:34.819 "params": { 00:17:34.819 "name": "Nvme$subsystem", 00:17:34.819 "trtype": "$TEST_TRANSPORT", 00:17:34.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.819 "adrfam": "ipv4", 00:17:34.819 "trsvcid": "$NVMF_PORT", 00:17:34.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.819 "hdgst": ${hdgst:-false}, 00:17:34.819 "ddgst": ${ddgst:-false} 00:17:34.819 }, 00:17:34.819 "method": "bdev_nvme_attach_controller" 00:17:34.819 } 00:17:34.819 EOF 00:17:34.819 )") 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3520365 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:34.819 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:34.820 { 00:17:34.820 "params": { 00:17:34.820 "name": "Nvme$subsystem", 00:17:34.820 "trtype": "$TEST_TRANSPORT", 00:17:34.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.820 "adrfam": "ipv4", 00:17:34.820 "trsvcid": "$NVMF_PORT", 00:17:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.820 "hdgst": ${hdgst:-false}, 00:17:34.820 "ddgst": ${ddgst:-false} 00:17:34.820 }, 00:17:34.820 "method": "bdev_nvme_attach_controller" 00:17:34.820 } 00:17:34.820 EOF 00:17:34.820 )") 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:34.820 { 00:17:34.820 "params": { 00:17:34.820 "name": "Nvme$subsystem", 00:17:34.820 "trtype": "$TEST_TRANSPORT", 00:17:34.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.820 "adrfam": "ipv4", 00:17:34.820 "trsvcid": "$NVMF_PORT", 00:17:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.820 "hdgst": ${hdgst:-false}, 00:17:34.820 "ddgst": ${ddgst:-false} 00:17:34.820 }, 00:17:34.820 "method": "bdev_nvme_attach_controller" 00:17:34.820 } 00:17:34.820 EOF 00:17:34.820 )") 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3520358 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.820 "params": { 00:17:34.820 "name": "Nvme1", 00:17:34.820 "trtype": "tcp", 00:17:34.820 "traddr": "10.0.0.2", 00:17:34.820 "adrfam": "ipv4", 00:17:34.820 "trsvcid": "4420", 00:17:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.820 "hdgst": false, 00:17:34.820 "ddgst": false 00:17:34.820 }, 00:17:34.820 "method": "bdev_nvme_attach_controller" 00:17:34.820 }' 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
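For readers following the trace: the gen_nvmf_target_json/printf/jq sequence above builds an in-memory SPDK JSON config and hands it to each bdevperf instance over a file descriptor (the --json /dev/fd/63 visible in the traced command lines), so no config file is written to disk. A condensed sketch of that pattern follows; the attach parameters are taken from this run, while the wrapper layout ("subsystems"/"bdev"/"config") and the variable names are assumptions, not copied from the framework.

  #!/usr/bin/env bash
  # Hypothetical stand-in for gen_nvmf_target_json: wrapper shape is assumed,
  # attach parameters match the values printed in this run.
  config='{
    "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ]
  }'

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # One instance per workload, each on its own core mask and shm id (-i);
  # <(...) is what appears as --json /dev/fd/63 in the traced commands.
  "$bdevperf" -m 0x10 -i 1 --json <(printf '%s\n' "$config") -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  "$bdevperf" -m 0x20 -i 2 --json <(printf '%s\n' "$config") -q 128 -o 4096 -w read -t 1 -s 256 &
  READ_PID=$!
  wait "$WRITE_PID" "$READ_PID"   # the real script also starts flush (0x40) and unmap (0x80) instances

The wait-by-PID at the end is what the "wait 3520358" / "wait 3520360" records in the trace correspond to.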
00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.820 "params": { 00:17:34.820 "name": "Nvme1", 00:17:34.820 "trtype": "tcp", 00:17:34.820 "traddr": "10.0.0.2", 00:17:34.820 "adrfam": "ipv4", 00:17:34.820 "trsvcid": "4420", 00:17:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.820 "hdgst": false, 00:17:34.820 "ddgst": false 00:17:34.820 }, 00:17:34.820 "method": "bdev_nvme_attach_controller" 00:17:34.820 }' 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.820 "params": { 00:17:34.820 "name": "Nvme1", 00:17:34.820 "trtype": "tcp", 00:17:34.820 "traddr": "10.0.0.2", 00:17:34.820 "adrfam": "ipv4", 00:17:34.820 "trsvcid": "4420", 00:17:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.820 "hdgst": false, 00:17:34.820 "ddgst": false 00:17:34.820 }, 00:17:34.820 "method": "bdev_nvme_attach_controller" 00:17:34.820 }' 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:34.820 18:50:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.820 "params": { 00:17:34.820 "name": "Nvme1", 00:17:34.820 "trtype": "tcp", 00:17:34.820 "traddr": "10.0.0.2", 00:17:34.820 "adrfam": "ipv4", 00:17:34.820 "trsvcid": "4420", 00:17:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.820 "hdgst": false, 00:17:34.820 "ddgst": false 00:17:34.820 }, 00:17:34.820 "method": "bdev_nvme_attach_controller" 00:17:34.820 }' 00:17:34.820 [2024-07-25 18:50:46.495128] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:34.820 [2024-07-25 18:50:46.495127] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:34.820 [2024-07-25 18:50:46.495157] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:34.820 [2024-07-25 18:50:46.495214] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 18:50:46.495215] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 18:50:46.495214] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:34.820 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:34.820 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:34.820 [2024-07-25 18:50:46.496511] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:34.820 [2024-07-25 18:50:46.496578] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:34.820 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.820 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.820 [2024-07-25 18:50:46.669584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.077 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.077 [2024-07-25 18:50:46.745134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:35.077 [2024-07-25 18:50:46.769654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.077 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.077 [2024-07-25 18:50:46.846355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:35.077 [2024-07-25 18:50:46.870789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.077 [2024-07-25 18:50:46.940472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.077 [2024-07-25 18:50:46.944842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:35.335 [2024-07-25 18:50:47.010374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:35.335 Running I/O for 1 seconds... 00:17:35.335 Running I/O for 1 seconds... 00:17:35.592 Running I/O for 1 seconds... 00:17:35.592 Running I/O for 1 seconds... 00:17:36.524 00:17:36.524 Latency(us) 00:17:36.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.524 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:36.524 Nvme1n1 : 1.00 159475.88 622.95 0.00 0.00 799.54 320.09 1080.13 00:17:36.524 =================================================================================================================== 00:17:36.524 Total : 159475.88 622.95 0.00 0.00 799.54 320.09 1080.13 00:17:36.524 00:17:36.524 Latency(us) 00:17:36.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.524 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:36.524 Nvme1n1 : 1.02 7098.66 27.73 0.00 0.00 17882.64 9223.59 28738.75 00:17:36.524 =================================================================================================================== 00:17:36.524 Total : 7098.66 27.73 0.00 0.00 17882.64 9223.59 28738.75 00:17:36.524 00:17:36.524 Latency(us) 00:17:36.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.524 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:36.524 Nvme1n1 : 1.01 9346.01 36.51 0.00 0.00 13632.66 8058.50 26408.58 00:17:36.524 =================================================================================================================== 00:17:36.524 Total : 9346.01 36.51 0.00 0.00 13632.66 8058.50 26408.58 00:17:36.524 00:17:36.524 Latency(us) 00:17:36.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.524 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:36.524 Nvme1n1 : 1.00 7065.69 27.60 0.00 0.00 18064.68 4757.43 43302.31 00:17:36.524 =================================================================================================================== 00:17:36.524 Total : 7065.69 27.60 0.00 0.00 18064.68 4757.43 43302.31 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3520360 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3520362 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3520365 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:36.781 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:36.782 rmmod nvme_tcp 00:17:36.782 rmmod nvme_fabrics 00:17:36.782 rmmod nvme_keyring 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3520327 ']' 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3520327 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3520327 ']' 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3520327 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:36.782 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3520327 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3520327' 00:17:37.039 killing process with pid 3520327 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3520327 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3520327 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.039 18:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.595 18:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:39.595 00:17:39.595 real 0m7.149s 00:17:39.595 user 0m16.499s 00:17:39.595 sys 0m3.496s 00:17:39.595 18:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:39.595 18:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:39.595 ************************************ 00:17:39.595 END TEST nvmf_bdev_io_wait 00:17:39.595 ************************************ 00:17:39.595 18:50:50 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:39.595 18:50:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:39.595 18:50:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:39.595 18:50:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.595 ************************************ 00:17:39.595 START TEST nvmf_queue_depth 00:17:39.595 ************************************ 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:39.595 * Looking for test storage... 00:17:39.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.595 18:50:51 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.595 18:50:51 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:39.595 18:50:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:41.009 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:41.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:41.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:41.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:41.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.010 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:41.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:17:41.269 00:17:41.269 --- 10.0.0.2 ping statistics --- 00:17:41.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.269 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:17:41.269 00:17:41.269 --- 10.0.0.1 ping statistics --- 00:17:41.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.269 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.269 18:50:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3522576 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
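Condensed from the nvmf_tcp_init trace above (the interface names cvl_0_0/cvl_0_1 are the ones discovered on this host): the target-side port is moved into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) exchange real NVMe/TCP traffic even though both run on one machine, and every target-side command is then prefixed with ip netns exec.

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                     # initiator -> target sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator
  # the target itself is then launched inside the namespace:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &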
00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3522576 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3522576 ']' 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:41.269 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.269 [2024-07-25 18:50:53.066410] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:41.269 [2024-07-25 18:50:53.066488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.269 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.269 [2024-07-25 18:50:53.132316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.528 [2024-07-25 18:50:53.218691] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.528 [2024-07-25 18:50:53.218744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.528 [2024-07-25 18:50:53.218767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.528 [2024-07-25 18:50:53.218778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.528 [2024-07-25 18:50:53.218787] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
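The waitforlisten step traced here blocks until the freshly started target answers on /var/tmp/spdk.sock before any rpc_cmd is issued. The real helper in autotest_common.sh does more bookkeeping than this; the sketch below is only the core idea, with the retry count chosen for illustration (rpc_get_methods is a standard SPDK RPC).

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  pid=3522576                     # nvmfpid reported in this run
  for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
      break                       # socket is up and the app is serving RPCs
    fi
    sleep 0.1
  done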
00:17:41.528 [2024-07-25 18:50:53.218812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.528 [2024-07-25 18:50:53.365167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.528 Malloc0 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:41.528 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.785 [2024-07-25 18:50:53.424772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3522601 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:41.785 18:50:53 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3522601 /var/tmp/bdevperf.sock 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3522601 ']' 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:41.785 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.785 [2024-07-25 18:50:53.470410] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:41.785 [2024-07-25 18:50:53.470472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522601 ] 00:17:41.785 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.785 [2024-07-25 18:50:53.530996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.785 [2024-07-25 18:50:53.622280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.043 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:42.043 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:42.043 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:42.043 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.043 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:42.043 NVMe0n1 00:17:42.043 18:50:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.043 18:50:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.043 Running I/O for 10 seconds... 
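The queue-depth measurement itself, condensed from the commands traced above: bdevperf is started idle (-z) on its own RPC socket, the NVMe-oF controller is attached to it at runtime, and bdevperf.py triggers the 10-second verify pass at queue depth 1024. (rpc_cmd in the trace is the framework wrapper around scripts/rpc.py; the sketch calls rpc.py directly.)

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bdevperf.sock

  "$spdk/build/examples/bdevperf" -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!

  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
  wait "$bdevperf_pid"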
00:17:54.244 00:17:54.244 Latency(us) 00:17:54.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.244 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:54.244 Verification LBA range: start 0x0 length 0x4000 00:17:54.244 NVMe0n1 : 10.07 8741.12 34.15 0.00 0.00 116629.01 12379.02 72623.60 00:17:54.244 =================================================================================================================== 00:17:54.244 Total : 8741.12 34.15 0.00 0.00 116629.01 12379.02 72623.60 00:17:54.244 0 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3522601 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3522601 ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3522601 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3522601 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3522601' 00:17:54.244 killing process with pid 3522601 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3522601 00:17:54.244 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.244 00:17:54.244 Latency(us) 00:17:54.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.244 =================================================================================================================== 00:17:54.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3522601 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:54.244 rmmod nvme_tcp 00:17:54.244 rmmod nvme_fabrics 00:17:54.244 rmmod nvme_keyring 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3522576 ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3522576 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 
3522576 ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3522576 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3522576 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3522576' 00:17:54.244 killing process with pid 3522576 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3522576 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3522576 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.244 18:51:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.812 18:51:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:54.812 00:17:54.812 real 0m15.623s 00:17:54.812 user 0m22.127s 00:17:54.812 sys 0m2.940s 00:17:54.812 18:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:54.812 18:51:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.812 ************************************ 00:17:54.812 END TEST nvmf_queue_depth 00:17:54.812 ************************************ 00:17:54.812 18:51:06 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:54.812 18:51:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:54.812 18:51:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:54.812 18:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.812 ************************************ 00:17:54.812 START TEST nvmf_target_multipath 00:17:54.812 ************************************ 00:17:54.812 18:51:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:55.071 * Looking for test storage... 
00:17:55.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.071 
18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.071 18:51:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:56.974 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:56.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.974 
18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:56.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:56.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.974 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.975 
18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:56.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:17:56.975 00:17:56.975 --- 10.0.0.2 ping statistics --- 00:17:56.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.975 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:17:56.975 00:17:56.975 --- 10.0.0.1 ping statistics --- 00:17:56.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.975 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:56.975 only one NIC for nvmf test 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.975 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.975 rmmod nvme_tcp 00:17:56.975 rmmod nvme_fabrics 00:17:56.975 rmmod nvme_keyring 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:57.233 18:51:08 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.233 18:51:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:59.140 00:17:59.140 real 0m4.239s 00:17:59.140 user 0m0.754s 00:17:59.140 sys 0m1.471s 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:59.140 18:51:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:59.140 
************************************ 00:17:59.140 END TEST nvmf_target_multipath 00:17:59.140 ************************************ 00:17:59.140 18:51:10 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:59.140 18:51:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:59.140 18:51:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:59.140 18:51:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:59.140 ************************************ 00:17:59.140 START TEST nvmf_zcopy 00:17:59.140 ************************************ 00:17:59.140 18:51:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:59.140 * Looking for test storage... 00:17:59.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.140 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.399 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.399 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.399 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
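The nvmftestinit trace that follows builds the NVMe/TCP test topology out of the two physical E810 ports it detects (0000:0a:00.0 / 0000:0a:00.1, exposed as cvl_0_0 and cvl_0_1): one port is moved into a network namespace for the SPDK target, the other stays in the host namespace as the initiator. Condensed into a standalone sketch (device names, addresses and port are taken from the trace below; this is not the harness code itself):

    ip netns add cvl_0_0_ns_spdk                                   # target runs inside this namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first port goes to the target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # second port stays host-side (initiator)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                             # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> host

Splitting the ports this way keeps target and initiator on the same machine while forcing the NVMe/TCP traffic through the physical NICs rather than loopback.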
00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.400 18:51:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.304 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:01.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:01.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:01.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:01.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:18:01.305 00:18:01.305 --- 10.0.0.2 ping statistics --- 00:18:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.305 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:18:01.305 00:18:01.305 --- 10.0.0.1 ping statistics --- 00:18:01.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.305 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.305 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3528381 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3528381 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3528381 ']' 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.565 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.565 [2024-07-25 18:51:13.245230] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:01.565 [2024-07-25 18:51:13.245315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.565 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.565 [2024-07-25 18:51:13.310856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.565 [2024-07-25 18:51:13.398942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.565 [2024-07-25 18:51:13.398999] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:01.565 [2024-07-25 18:51:13.399027] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.565 [2024-07-25 18:51:13.399038] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.565 [2024-07-25 18:51:13.399048] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.565 [2024-07-25 18:51:13.399098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 [2024-07-25 18:51:13.543922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 [2024-07-25 18:51:13.560153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 malloc0 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.825 
18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:01.825 { 00:18:01.825 "params": { 00:18:01.825 "name": "Nvme$subsystem", 00:18:01.825 "trtype": "$TEST_TRANSPORT", 00:18:01.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.825 "adrfam": "ipv4", 00:18:01.825 "trsvcid": "$NVMF_PORT", 00:18:01.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.825 "hdgst": ${hdgst:-false}, 00:18:01.825 "ddgst": ${ddgst:-false} 00:18:01.825 }, 00:18:01.825 "method": "bdev_nvme_attach_controller" 00:18:01.825 } 00:18:01.825 EOF 00:18:01.825 )") 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:01.825 18:51:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:01.825 "params": { 00:18:01.825 "name": "Nvme1", 00:18:01.825 "trtype": "tcp", 00:18:01.825 "traddr": "10.0.0.2", 00:18:01.825 "adrfam": "ipv4", 00:18:01.825 "trsvcid": "4420", 00:18:01.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.825 "hdgst": false, 00:18:01.825 "ddgst": false 00:18:01.825 }, 00:18:01.825 "method": "bdev_nvme_attach_controller" 00:18:01.825 }' 00:18:01.825 [2024-07-25 18:51:13.642818] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:01.825 [2024-07-25 18:51:13.642903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528405 ] 00:18:01.825 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.083 [2024-07-25 18:51:13.703930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.083 [2024-07-25 18:51:13.794519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.342 Running I/O for 10 seconds... 
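The zcopy data-path run above works against the target the preceding RPCs just built: a TCP transport created with zero-copy enabled (-o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 allowing up to 10 namespaces, a 32 MB malloc bdev with 4096-byte blocks attached as namespace 1, and a listener on 10.0.0.2:4420. On the initiator side, bdevperf consumes a JSON config produced by gen_nvmf_target_json and handed over as a process-substitution fd (/dev/fd/62 above). A hypothetical standalone equivalent using a temp file instead of that fd; the bdev_nvme_attach_controller parameters are copied verbatim from the printf above, while the outer "subsystems"/"bdev" wrapper is an assumption about what the helper emits:

    cat > /tmp/nvme1.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    # same workload parameters as the 10-second verify run traced above
    ./build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192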
00:18:12.362 00:18:12.363 Latency(us) 00:18:12.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.363 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:12.363 Verification LBA range: start 0x0 length 0x1000 00:18:12.363 Nvme1n1 : 10.01 5894.91 46.05 0.00 0.00 21654.23 3034.07 32234.00 00:18:12.363 =================================================================================================================== 00:18:12.363 Total : 5894.91 46.05 0.00 0.00 21654.23 3034.07 32234.00 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3529598 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:12.363 { 00:18:12.363 "params": { 00:18:12.363 "name": "Nvme$subsystem", 00:18:12.363 "trtype": "$TEST_TRANSPORT", 00:18:12.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.363 "adrfam": "ipv4", 00:18:12.363 "trsvcid": "$NVMF_PORT", 00:18:12.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.363 "hdgst": ${hdgst:-false}, 00:18:12.363 "ddgst": ${ddgst:-false} 00:18:12.363 }, 00:18:12.363 "method": "bdev_nvme_attach_controller" 00:18:12.363 } 00:18:12.363 EOF 00:18:12.363 )") 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:12.363 [2024-07-25 18:51:24.221406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.363 [2024-07-25 18:51:24.221459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
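The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs that begin here appear to be driven by the test itself rather than a malfunction: while the 5-second randrw bdevperf job above is running, the namespace-add RPC keeps being issued for a namespace that already exists, and each attempt goes through the subsystem pause/resume path (the nvmf_rpc_ns_paused frames in the messages) with zero-copy I/O in flight before being rejected. Each failing attempt corresponds to a call of this shape (standalone rpc.py form of the rpc_cmd wrapper seen earlier; the actual loop lives in test/nvmf/target/zcopy.sh and may be structured differently):

    # Rejected because NSID 1 is already bound to malloc0 on nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1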
00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:12.363 18:51:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:12.363 "params": { 00:18:12.363 "name": "Nvme1", 00:18:12.363 "trtype": "tcp", 00:18:12.363 "traddr": "10.0.0.2", 00:18:12.363 "adrfam": "ipv4", 00:18:12.363 "trsvcid": "4420", 00:18:12.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.363 "hdgst": false, 00:18:12.363 "ddgst": false 00:18:12.363 }, 00:18:12.363 "method": "bdev_nvme_attach_controller" 00:18:12.363 }' 00:18:12.363 [2024-07-25 18:51:24.229365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.363 [2024-07-25 18:51:24.229392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.363 [2024-07-25 18:51:24.237379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.363 [2024-07-25 18:51:24.237403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.245397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.245419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.253423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.253443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.257846] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:12.622 [2024-07-25 18:51:24.257907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529598 ] 00:18:12.622 [2024-07-25 18:51:24.261441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.261461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.269462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.269481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.277483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.277502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.285506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.285525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.622 [2024-07-25 18:51:24.293544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.293568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.301565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.301589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.309588] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.309612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.317607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.317631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.321632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.622 [2024-07-25 18:51:24.325634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.325660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.333682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.333719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.341678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.341705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.349697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.349721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.357718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.357744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.365741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.365767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.373781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.373810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.381821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.381861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.389807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.389832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.397829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.397854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.405849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.405875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.413873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.413897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.417425] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:12.622 [2024-07-25 18:51:24.421895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.421920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.429920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.429947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.437970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.438009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.445992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.446031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.454017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.454057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.462039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.462084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.622 [2024-07-25 18:51:24.470067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.622 [2024-07-25 18:51:24.470119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.623 [2024-07-25 18:51:24.478089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.623 [2024-07-25 18:51:24.478140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.623 [2024-07-25 18:51:24.486087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.623 [2024-07-25 18:51:24.486150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.623 [2024-07-25 18:51:24.494154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.623 [2024-07-25 18:51:24.494188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.502175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.502209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.510170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.510199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.518170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.518191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.526195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.526216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.534265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:12.883 [2024-07-25 18:51:24.534292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.542232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.542255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.550257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.550280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.558283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.558307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.566304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.566326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.574326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.574361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.582365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.582385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.590404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.590429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.598420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.598447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.606448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.606475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.614475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.614503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.622500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.622526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.669320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.669347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.674638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.674674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.682662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.682688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 Running I/O for 5 seconds... 
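From here until the 5-second bdevperf run finishes, the log is dominated by repeated pairs of spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused: "Unable to add namespace". The pairing shows what is happening on the target side: while bdevperf drives I/O, the namespace-add RPC path keeps being exercised with NSID 1 on a subsystem that already has that namespace attached, the subsystem is paused for the update, and every attempt is rejected, so only this error pair is logged per iteration. The loop that drives it lives in target/zcopy.sh and is not shown in this excerpt; a hedged sketch of the kind of RPC loop that reproduces the pattern (the bdev name Malloc0 is a placeholder and the rpc.py option spelling may differ between SPDK versions):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path as seen in the trace above
# Each iteration asks the running target to attach NSID 1 to cnode1 again; the
# namespace is already in use, so the call fails and the target logs the
# "Requested NSID 1 already in use" / "Unable to add namespace" pair.
for _ in $(seq 1 20); do
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
done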
00:18:12.883 [2024-07-25 18:51:24.696646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.696674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.710135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.710165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.724457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.724484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.738931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.738959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.883 [2024-07-25 18:51:24.753068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.883 [2024-07-25 18:51:24.753096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.767229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.767258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.781339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.781367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.795484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.795515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.809639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.809666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.822948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.822975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.837732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.837758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.852245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.852273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.866069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.866097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.879469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.879498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.893101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 
[2024-07-25 18:51:24.893128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.906612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.906638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.920470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.920501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.934734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.934761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.948929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.948955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.963107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.963134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.977157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.977185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:24.991326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:24.991372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:25.005300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:25.005328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.144 [2024-07-25 18:51:25.019215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.144 [2024-07-25 18:51:25.019257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.032968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.032996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.047040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.047076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.061244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.061272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.075499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.075529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.089735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.089761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.103545] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.103572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.117610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.117637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.131582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.131608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.403 [2024-07-25 18:51:25.145810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.403 [2024-07-25 18:51:25.145836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.160056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.160092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.174143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.174170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.188180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.188207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.202662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.202689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.216661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.216688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.230592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.230619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.244156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.244184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.257717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.257744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.404 [2024-07-25 18:51:25.272134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.404 [2024-07-25 18:51:25.272161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.286264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.286292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.300114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.300142] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.314907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.314934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.329324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.329351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.342964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.342991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.357505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.357532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.371371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.371398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.385603] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.385630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.400000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.400042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.414522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.414549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.428090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.428117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.441592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.441619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.455461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.455505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.469613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.469657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.483789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.483816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.498152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.498197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.512045] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.512082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.662 [2024-07-25 18:51:25.526718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.662 [2024-07-25 18:51:25.526746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.541611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.541643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.556025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.556056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.569950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.569977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.583716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.583743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.598303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.598348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.612539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.612581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.626372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.626399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.640365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.640392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.654431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.654458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.667899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.667926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.682467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.682511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.697449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.697477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.711356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.711383] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.725102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.725140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.739275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.739303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.753938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.753966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.767782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.767810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.781435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.781462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.921 [2024-07-25 18:51:25.795287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.921 [2024-07-25 18:51:25.795315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.809290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.809317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.823147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.823174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.837509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.837536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.851503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.851546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.866178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.866205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.880421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.880448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.894433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.894459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.907527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.907553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.921723] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.921750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.936135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.936163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.950226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.950253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.963730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.963757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.977519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.977546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:25.991859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:25.991916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:26.006288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:26.006316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:26.021186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:26.021219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:26.035547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:26.035574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.181 [2024-07-25 18:51:26.050006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.181 [2024-07-25 18:51:26.050033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.064602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.064629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.078691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.078718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.092674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.092702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.106377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.106403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.120251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.120283] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.134445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.134487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.149019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.149069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.163377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.163404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.177785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.177811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.191858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.191884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.205882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.205908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.220053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.220106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.234364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.234391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.248665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.248692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.263500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.263561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.277604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.277631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.292203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.292231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.442 [2024-07-25 18:51:26.306458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.442 [2024-07-25 18:51:26.306501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.320320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.320364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.335133] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.335161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.348985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.349012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.363013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.363044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.377786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.377827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.391710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.391736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.405877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.405904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.420141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.420168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.433982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.434009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.448182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.448209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.463296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.463326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.477628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.477654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.491803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.491846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.505746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.505773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.520226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.520253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.534178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.534218] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.547371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.547398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.561629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.561656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.702 [2024-07-25 18:51:26.576198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.702 [2024-07-25 18:51:26.576224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.590252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.590280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.604692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.604736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.618417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.618444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.632824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.632852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.648044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.648114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.663275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.663303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.677533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.677576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.692189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.692235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.705967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.706010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.720577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.720609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.734647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.734678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.748883] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.748910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.763326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.763368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.777477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.777504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.792015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.792046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.806943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.806980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.821491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.821518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.961 [2024-07-25 18:51:26.835706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.961 [2024-07-25 18:51:26.835735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.849668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.849696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.863669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.863696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.877578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.877609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.891378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.891407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.904647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.904674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.917472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.917499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.931050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.931086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.944922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.944965] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.958525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.958567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.971442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.971469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.984912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.984939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:26.997999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:26.998027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:27.011855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:27.011891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:27.025754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:27.025782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:27.039123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:27.039150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:27.052829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:27.052856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:27.066088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:27.066116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:27.079666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:27.079693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.221 [2024-07-25 18:51:27.092973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.221 [2024-07-25 18:51:27.092999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.480 [2024-07-25 18:51:27.105970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.480 [2024-07-25 18:51:27.105999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.480 [2024-07-25 18:51:27.119920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.480 [2024-07-25 18:51:27.119949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.480 [2024-07-25 18:51:27.133406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.480 [2024-07-25 18:51:27.133434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.480 [2024-07-25 18:51:27.146580] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:15.480 [2024-07-25 18:51:27.146624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2029 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546 "Unable to add namespace") repeats at roughly 13 to 14 ms intervals from 18:51:27.160 to 18:51:29.529 while the test keeps issuing add-namespace requests for an NSID that already exists; intermediate occurrences elided ...]
[2024-07-25 18:51:29.543875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.813 [2024-07-25 18:51:29.543900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:17.813
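Each pair above is the expected failure path of the nvmf_subsystem_add_ns RPC when the requested NSID is already attached: the subsystem is paused, spdk_nvmf_subsystem_add_ns_ext rejects the duplicate NSID, and the paused-state callback logs "Unable to add namespace". As a rough illustration only (this is not the exact loop zcopy.sh runs), the same two errors can be provoked against a running target with scripts/rpc.py; the bdev name malloc1 below is hypothetical, and an existing subsystem nqn.2016-06.io.spdk:cnode1 is assumed:

# Illustrative sketch: trigger the "Requested NSID 1 already in use" error path.
scripts/rpc.py bdev_malloc_create -b malloc1 64 512                              # 64 MiB malloc bdev, 512-byte blocks (hypothetical name)
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1     # succeeds if NSID 1 is free
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1     # fails: NSID 1 already in use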
[2024-07-25 18:51:29.557980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:17.813 [2024-07-25 18:51:29.558007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair continues at ~14 ms intervals through 18:51:29.709; intermediate occurrences elided ...]
00:18:18.073 Latency(us)
00:18:18.073 Device Information          : runtime(s)    IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:18:18.073 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:18.073 Nvme1n1                      :       5.01  9086.94   70.99    0.00    0.00   14063.71   5898.24  23107.51
00:18:18.073 ===================================================================================================================
00:18:18.073 Total                        :             9086.94   70.99    0.00    0.00   14063.71   5898.24  23107.51
00:18:18.073 [2024-07-25 18:51:29.717716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.073 [2024-07-25 18:51:29.717744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.073
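The Latency(us) block is the Nvme1n1 job's summary, and its columns are mutually consistent, which is a quick way to sanity-check such tables: IOPS times the 8192-byte I/O size from the Job line should reproduce the MiB/s column, and queue depth divided by the average latency should land near the reported IOPS. A one-line check in plain shell (bc assumed available):

# Cross-check of the summary columns above.
echo 'scale=2; 9086.94 * 8192 / (1024 * 1024)' | bc       # -> 70.99 MiB/s, matching the MiB/s column
echo 'scale=2; 128 / (14063.71 / 1000000)' | bc           # -> ~9101 IOPS from depth/avg-latency, close to 9086.94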
[2024-07-25 18:51:29.725739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:18.073 [2024-07-25 18:51:29.725767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair now repeats at ~8 ms intervals from 18:51:29.733 through 18:51:29.942 as the remaining queued add attempts drain; intermediate occurrences elided ...]
[2024-07-25 18:51:29.950393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-07-25 18:51:29.950418]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:18.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3529598) - No such process 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3529598 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.333 delay0 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.333 18:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:18.333 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.333 [2024-07-25 18:51:30.070531] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:24.905 Initializing NVMe Controllers 00:18:24.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:24.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:24.905 Initialization complete. Launching workers. 
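The sequence just above is the abort leg of the test: a delay bdev (delay0) is layered on malloc0 with 1,000,000 us latencies so that I/O stays in flight long enough for aborts to land, it is exposed as NSID 1 of cnode1, and the abort example is pointed at the TCP listener; its per-namespace and per-controller abort counts follow immediately below. Outside the test harness, roughly the same steps could be driven directly with rpc.py. This is a sketch under the assumption that a target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc0 bdev:

# Sketch only: mirror the delay-bdev + abort sequence shown in the log above.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive random 50/50 I/O at queue depth 64 for 5 seconds and issue aborts against the slow namespace:
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'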
00:18:24.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 102 00:18:24.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 389, failed to submit 33 00:18:24.905 success 237, unsuccess 152, failed 0 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.905 rmmod nvme_tcp 00:18:24.905 rmmod nvme_fabrics 00:18:24.905 rmmod nvme_keyring 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3528381 ']' 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3528381 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3528381 ']' 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3528381 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3528381 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3528381' 00:18:24.905 killing process with pid 3528381 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3528381 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3528381 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.905 18:51:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.906 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.906 18:51:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.815 18:51:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.815 00:18:26.815 real 0m27.633s 00:18:26.815 user 0m39.227s 00:18:26.815 sys 0m8.729s 00:18:26.815 18:51:38 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:26.815 18:51:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:26.815 ************************************ 00:18:26.815 END TEST nvmf_zcopy 00:18:26.815 ************************************ 00:18:26.815 18:51:38 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:26.815 18:51:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:26.815 18:51:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:26.815 18:51:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.815 ************************************ 00:18:26.815 START TEST nvmf_nmic 00:18:26.815 ************************************ 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:26.815 * Looking for test storage... 00:18:26.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.815 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.074 18:51:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.978 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:18:28.979 00:18:28.979 --- 10.0.0.2 ping statistics --- 00:18:28.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.979 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:18:28.979 00:18:28.979 --- 10.0.0.1 ping statistics --- 00:18:28.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.979 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3532968 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3532968 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3532968 ']' 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:28.979 18:51:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.979 [2024-07-25 18:51:40.814195] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:28.979 [2024-07-25 18:51:40.814280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.979 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.238 [2024-07-25 18:51:40.884614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:29.238 [2024-07-25 18:51:40.979607] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.238 [2024-07-25 18:51:40.979668] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:29.238 [2024-07-25 18:51:40.979685] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.238 [2024-07-25 18:51:40.979698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.238 [2024-07-25 18:51:40.979710] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.238 [2024-07-25 18:51:40.979792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.238 [2024-07-25 18:51:40.979848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.238 [2024-07-25 18:51:40.979903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.238 [2024-07-25 18:51:40.979908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.238 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:29.238 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:29.238 18:51:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.238 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.238 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 [2024-07-25 18:51:41.145963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 Malloc0 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 [2024-07-25 18:51:41.198895] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:29.498 test case1: single bdev can't be used in multiple subsystems 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.498 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.498 [2024-07-25 18:51:41.222740] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:29.498 [2024-07-25 18:51:41.222768] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:29.498 [2024-07-25 18:51:41.222798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.498 request: 00:18:29.498 { 00:18:29.498 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:29.498 "namespace": { 00:18:29.498 "bdev_name": "Malloc0", 00:18:29.498 "no_auto_visible": false 00:18:29.498 }, 00:18:29.498 "method": "nvmf_subsystem_add_ns", 00:18:29.498 "req_id": 1 00:18:29.498 } 00:18:29.498 Got JSON-RPC error response 00:18:29.498 response: 00:18:29.498 { 00:18:29.499 "code": -32602, 00:18:29.499 "message": "Invalid parameters" 00:18:29.499 } 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:29.499 Adding namespace failed - expected result. 
00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:29.499 test case2: host connect to nvmf target in multiple paths 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:29.499 [2024-07-25 18:51:41.230849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.499 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:30.065 18:51:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:30.999 18:51:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:30.999 18:51:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:30.999 18:51:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.999 18:51:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:30.999 18:51:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:32.936 18:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:32.936 18:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:32.936 18:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:32.936 18:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:32.936 18:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.936 18:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:32.936 18:51:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:32.936 [global] 00:18:32.936 thread=1 00:18:32.936 invalidate=1 00:18:32.936 rw=write 00:18:32.936 time_based=1 00:18:32.936 runtime=1 00:18:32.936 ioengine=libaio 00:18:32.936 direct=1 00:18:32.936 bs=4096 00:18:32.936 iodepth=1 00:18:32.936 norandommap=0 00:18:32.936 numjobs=1 00:18:32.936 00:18:32.936 verify_dump=1 00:18:32.936 verify_backlog=512 00:18:32.936 verify_state_save=0 00:18:32.936 do_verify=1 00:18:32.936 verify=crc32c-intel 00:18:32.936 [job0] 00:18:32.936 filename=/dev/nvme0n1 00:18:32.936 Could not set queue depth (nvme0n1) 00:18:32.936 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.936 fio-3.35 00:18:32.936 Starting 1 thread 00:18:34.310 00:18:34.310 job0: (groupid=0, jobs=1): err= 0: pid=3533483: Thu Jul 25 18:51:45 2024 00:18:34.310 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:18:34.310 slat (nsec): min=12611, max=36776, avg=17414.57, stdev=7924.20 
00:18:34.310 clat (usec): min=40894, max=42046, avg=41510.11, stdev=516.56 00:18:34.310 lat (usec): min=40912, max=42065, avg=41527.53, stdev=519.15 00:18:34.310 clat percentiles (usec): 00:18:34.310 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:34.310 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:18:34.310 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:34.310 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:34.310 | 99.99th=[42206] 00:18:34.310 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:18:34.310 slat (usec): min=11, max=29420, avg=78.98, stdev=1299.27 00:18:34.310 clat (usec): min=155, max=413, avg=194.74, stdev=22.23 00:18:34.310 lat (usec): min=166, max=29717, avg=273.72, stdev=1304.05 00:18:34.310 clat percentiles (usec): 00:18:34.310 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:18:34.310 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:18:34.310 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 217], 00:18:34.310 | 99.00th=[ 281], 99.50th=[ 338], 99.90th=[ 416], 99.95th=[ 416], 00:18:34.310 | 99.99th=[ 416] 00:18:34.310 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:34.310 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:34.310 lat (usec) : 250=94.56%, 500=1.50% 00:18:34.310 lat (msec) : 50=3.94% 00:18:34.310 cpu : usr=1.08%, sys=1.08%, ctx=535, majf=0, minf=2 00:18:34.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.310 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.310 00:18:34.310 Run status group 0 (all jobs): 00:18:34.310 READ: bw=82.8KiB/s (84.7kB/s), 82.8KiB/s-82.8KiB/s (84.7kB/s-84.7kB/s), io=84.0KiB (86.0kB), run=1015-1015msec 00:18:34.310 WRITE: bw=2018KiB/s (2066kB/s), 2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:18:34.310 00:18:34.310 Disk stats (read/write): 00:18:34.310 nvme0n1: ios=44/512, merge=0/0, ticks=1731/99, in_queue=1830, util=98.70% 00:18:34.310 18:51:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.310 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.311 rmmod nvme_tcp 00:18:34.311 rmmod nvme_fabrics 00:18:34.311 rmmod nvme_keyring 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3532968 ']' 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3532968 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3532968 ']' 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3532968 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3532968 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3532968' 00:18:34.311 killing process with pid 3532968 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3532968 00:18:34.311 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3532968 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.569 18:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.107 18:51:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.107 00:18:37.107 real 0m9.818s 00:18:37.107 user 0m22.471s 00:18:37.107 sys 0m2.232s 00:18:37.107 18:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:37.107 18:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:37.107 ************************************ 00:18:37.107 END TEST nvmf_nmic 00:18:37.107 ************************************ 00:18:37.107 18:51:48 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:37.107 18:51:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:37.107 18:51:48 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:18:37.107 18:51:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.107 ************************************ 00:18:37.107 START TEST nvmf_fio_target 00:18:37.107 ************************************ 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:37.107 * Looking for test storage... 00:18:37.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.107 18:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.012 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.013 18:51:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:39.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:39.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.013 18:51:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:39.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:39.013 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:18:39.013 00:18:39.013 --- 10.0.0.2 ping statistics --- 00:18:39.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.013 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:18:39.013 00:18:39.013 --- 10.0.0.1 ping statistics --- 00:18:39.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.013 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3535556 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3535556 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3535556 ']' 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.013 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.014 [2024-07-25 18:51:50.579939] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:39.014 [2024-07-25 18:51:50.580023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.014 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.014 [2024-07-25 18:51:50.653361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.014 [2024-07-25 18:51:50.748756] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.014 [2024-07-25 18:51:50.748829] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.014 [2024-07-25 18:51:50.748846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.014 [2024-07-25 18:51:50.748878] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.014 [2024-07-25 18:51:50.748891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.014 [2024-07-25 18:51:50.748956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.014 [2024-07-25 18:51:50.749011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.014 [2024-07-25 18:51:50.749084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.014 [2024-07-25 18:51:50.749088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.014 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.014 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:39.014 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.014 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.014 18:51:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.273 18:51:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.273 18:51:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:39.273 [2024-07-25 18:51:51.137750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.533 18:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:39.791 18:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:39.791 18:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:40.048 18:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:40.048 18:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:40.306 18:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:40.306 18:51:51 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:40.564 18:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:40.564 18:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:40.821 18:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:41.079 18:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:41.079 18:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:41.337 18:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:41.337 18:51:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:41.595 18:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:41.595 18:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:41.854 18:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.113 18:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:42.113 18:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.372 18:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:42.372 18:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:42.631 18:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.631 [2024-07-25 18:51:54.487455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.631 18:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:42.889 18:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:43.146 18:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:43.713 18:51:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:43.713 18:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:43.713 18:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # 
local nvme_device_counter=1 nvme_devices=0 00:18:43.713 18:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:43.713 18:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:43.713 18:51:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:46.247 18:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:46.247 18:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:46.247 18:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.247 18:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:46.247 18:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.247 18:51:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:46.247 18:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:46.247 [global] 00:18:46.247 thread=1 00:18:46.247 invalidate=1 00:18:46.247 rw=write 00:18:46.247 time_based=1 00:18:46.247 runtime=1 00:18:46.247 ioengine=libaio 00:18:46.247 direct=1 00:18:46.247 bs=4096 00:18:46.247 iodepth=1 00:18:46.247 norandommap=0 00:18:46.247 numjobs=1 00:18:46.247 00:18:46.247 verify_dump=1 00:18:46.247 verify_backlog=512 00:18:46.247 verify_state_save=0 00:18:46.247 do_verify=1 00:18:46.247 verify=crc32c-intel 00:18:46.247 [job0] 00:18:46.247 filename=/dev/nvme0n1 00:18:46.247 [job1] 00:18:46.247 filename=/dev/nvme0n2 00:18:46.247 [job2] 00:18:46.248 filename=/dev/nvme0n3 00:18:46.248 [job3] 00:18:46.248 filename=/dev/nvme0n4 00:18:46.248 Could not set queue depth (nvme0n1) 00:18:46.248 Could not set queue depth (nvme0n2) 00:18:46.248 Could not set queue depth (nvme0n3) 00:18:46.248 Could not set queue depth (nvme0n4) 00:18:46.248 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.248 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.248 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.248 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.248 fio-3.35 00:18:46.248 Starting 4 threads 00:18:47.182 00:18:47.182 job0: (groupid=0, jobs=1): err= 0: pid=3536625: Thu Jul 25 18:51:59 2024 00:18:47.182 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:47.182 slat (nsec): min=5700, max=67838, avg=11944.44, stdev=6216.70 00:18:47.182 clat (usec): min=211, max=41123, avg=357.90, stdev=1366.94 00:18:47.182 lat (usec): min=218, max=41130, avg=369.85, stdev=1367.15 00:18:47.182 clat percentiles (usec): 00:18:47.182 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:18:47.182 | 30.00th=[ 262], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 00:18:47.182 | 70.00th=[ 318], 80.00th=[ 355], 90.00th=[ 412], 95.00th=[ 461], 00:18:47.182 | 99.00th=[ 545], 99.50th=[ 594], 99.90th=[34866], 99.95th=[41157], 00:18:47.182 | 99.99th=[41157] 00:18:47.182 write: IOPS=1843, BW=7373KiB/s (7550kB/s)(7380KiB/1001msec); 0 zone resets 00:18:47.182 slat (nsec): min=7626, max=76180, avg=11904.46, stdev=6030.75 00:18:47.182 clat (usec): min=150, max=784, avg=215.68, 
stdev=58.69 00:18:47.182 lat (usec): min=158, max=798, avg=227.58, stdev=60.95 00:18:47.182 clat percentiles (usec): 00:18:47.182 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:18:47.182 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 202], 60.00th=[ 215], 00:18:47.182 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 285], 95.00th=[ 330], 00:18:47.182 | 99.00th=[ 429], 99.50th=[ 453], 99.90th=[ 693], 99.95th=[ 783], 00:18:47.182 | 99.99th=[ 783] 00:18:47.182 bw ( KiB/s): min= 8192, max= 8192, per=38.63%, avg=8192.00, stdev= 0.00, samples=1 00:18:47.182 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:47.182 lat (usec) : 250=53.74%, 500=44.96%, 750=1.18%, 1000=0.03% 00:18:47.182 lat (msec) : 2=0.03%, 50=0.06% 00:18:47.182 cpu : usr=3.20%, sys=5.40%, ctx=3383, majf=0, minf=1 00:18:47.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.182 issued rwts: total=1536,1845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:47.182 job1: (groupid=0, jobs=1): err= 0: pid=3536626: Thu Jul 25 18:51:59 2024 00:18:47.182 read: IOPS=1223, BW=4895KiB/s (5012kB/s)(4924KiB/1006msec) 00:18:47.182 slat (nsec): min=5495, max=63725, avg=12242.19, stdev=8892.45 00:18:47.182 clat (usec): min=218, max=42034, avg=496.96, stdev=2878.68 00:18:47.182 lat (usec): min=224, max=42051, avg=509.20, stdev=2879.03 00:18:47.182 clat percentiles (usec): 00:18:47.182 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:18:47.182 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:18:47.182 | 70.00th=[ 306], 80.00th=[ 330], 90.00th=[ 375], 95.00th=[ 424], 00:18:47.182 | 99.00th=[ 562], 99.50th=[ 1565], 99.90th=[42206], 99.95th=[42206], 00:18:47.182 | 99.99th=[42206] 00:18:47.182 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:18:47.182 slat (nsec): min=7344, max=40168, avg=11509.12, stdev=5102.64 00:18:47.182 clat (usec): min=145, max=1069, avg=228.89, stdev=62.42 00:18:47.182 lat (usec): min=153, max=1079, avg=240.40, stdev=63.12 00:18:47.182 clat percentiles (usec): 00:18:47.182 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 182], 00:18:47.182 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 231], 00:18:47.182 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 359], 00:18:47.182 | 99.00th=[ 412], 99.50th=[ 453], 99.90th=[ 996], 99.95th=[ 1074], 00:18:47.182 | 99.99th=[ 1074] 00:18:47.182 bw ( KiB/s): min= 4096, max= 8192, per=28.97%, avg=6144.00, stdev=2896.31, samples=2 00:18:47.182 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:47.182 lat (usec) : 250=50.99%, 500=48.14%, 750=0.47%, 1000=0.04% 00:18:47.182 lat (msec) : 2=0.14%, 50=0.22% 00:18:47.182 cpu : usr=2.29%, sys=3.98%, ctx=2768, majf=0, minf=1 00:18:47.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.182 issued rwts: total=1231,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:47.183 job2: (groupid=0, jobs=1): err= 0: pid=3536632: Thu Jul 25 18:51:59 2024 00:18:47.183 read: IOPS=504, 
BW=2020KiB/s (2068kB/s)(2068KiB/1024msec) 00:18:47.183 slat (nsec): min=5949, max=41682, avg=10967.09, stdev=6040.00 00:18:47.183 clat (usec): min=223, max=42126, avg=1468.15, stdev=6712.65 00:18:47.183 lat (usec): min=229, max=42133, avg=1479.11, stdev=6713.90 00:18:47.183 clat percentiles (usec): 00:18:47.183 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 255], 00:18:47.183 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 334], 00:18:47.183 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 433], 95.00th=[ 498], 00:18:47.183 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:47.183 | 99.99th=[42206] 00:18:47.183 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:18:47.183 slat (nsec): min=8169, max=59111, avg=15303.83, stdev=8240.46 00:18:47.183 clat (usec): min=164, max=536, avg=232.05, stdev=52.09 00:18:47.183 lat (usec): min=172, max=550, avg=247.35, stdev=54.95 00:18:47.183 clat percentiles (usec): 00:18:47.183 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 198], 00:18:47.183 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:18:47.183 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 351], 00:18:47.183 | 99.00th=[ 461], 99.50th=[ 494], 99.90th=[ 537], 99.95th=[ 537], 00:18:47.183 | 99.99th=[ 537] 00:18:47.183 bw ( KiB/s): min= 4096, max= 4096, per=19.31%, avg=4096.00, stdev= 0.00, samples=2 00:18:47.183 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:18:47.183 lat (usec) : 250=62.04%, 500=36.08%, 750=0.84% 00:18:47.183 lat (msec) : 2=0.06%, 20=0.06%, 50=0.91% 00:18:47.183 cpu : usr=1.96%, sys=2.25%, ctx=1544, majf=0, minf=1 00:18:47.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.183 issued rwts: total=517,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:47.183 job3: (groupid=0, jobs=1): err= 0: pid=3536633: Thu Jul 25 18:51:59 2024 00:18:47.183 read: IOPS=512, BW=2051KiB/s (2101kB/s)(2080KiB/1014msec) 00:18:47.183 slat (nsec): min=5593, max=48366, avg=8819.57, stdev=4782.81 00:18:47.183 clat (usec): min=205, max=41378, avg=1366.11, stdev=6346.48 00:18:47.183 lat (usec): min=211, max=41393, avg=1374.93, stdev=6348.33 00:18:47.183 clat percentiles (usec): 00:18:47.183 | 1.00th=[ 221], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:18:47.183 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 347], 00:18:47.183 | 70.00th=[ 392], 80.00th=[ 420], 90.00th=[ 490], 95.00th=[ 562], 00:18:47.183 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:47.183 | 99.99th=[41157] 00:18:47.183 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:18:47.183 slat (nsec): min=6594, max=41321, avg=13889.70, stdev=6831.46 00:18:47.183 clat (usec): min=163, max=2737, avg=273.07, stdev=130.01 00:18:47.183 lat (usec): min=170, max=2746, avg=286.96, stdev=131.04 00:18:47.183 clat percentiles (usec): 00:18:47.183 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 196], 20.00th=[ 212], 00:18:47.183 | 30.00th=[ 227], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 260], 00:18:47.183 | 70.00th=[ 285], 80.00th=[ 326], 90.00th=[ 383], 95.00th=[ 408], 00:18:47.183 | 99.00th=[ 490], 99.50th=[ 635], 99.90th=[ 2606], 99.95th=[ 2737], 00:18:47.183 | 99.99th=[ 2737] 00:18:47.183 bw 
( KiB/s): min= 4096, max= 4096, per=19.31%, avg=4096.00, stdev= 0.00, samples=2 00:18:47.183 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:18:47.183 lat (usec) : 250=37.18%, 500=59.39%, 750=2.40%, 1000=0.06% 00:18:47.183 lat (msec) : 4=0.13%, 50=0.84% 00:18:47.183 cpu : usr=1.38%, sys=2.07%, ctx=1544, majf=0, minf=2 00:18:47.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.183 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:47.183 00:18:47.183 Run status group 0 (all jobs): 00:18:47.183 READ: bw=14.5MiB/s (15.2MB/s), 2020KiB/s-6138KiB/s (2068kB/s-6285kB/s), io=14.9MiB (15.6MB), run=1001-1024msec 00:18:47.183 WRITE: bw=20.7MiB/s (21.7MB/s), 4000KiB/s-7373KiB/s (4096kB/s-7550kB/s), io=21.2MiB (22.2MB), run=1001-1024msec 00:18:47.183 00:18:47.183 Disk stats (read/write): 00:18:47.183 nvme0n1: ios=1286/1536, merge=0/0, ticks=1318/322, in_queue=1640, util=85.27% 00:18:47.183 nvme0n2: ios=1195/1536, merge=0/0, ticks=1181/342, in_queue=1523, util=89.22% 00:18:47.183 nvme0n3: ios=569/1024, merge=0/0, ticks=713/230, in_queue=943, util=93.30% 00:18:47.183 nvme0n4: ios=572/1024, merge=0/0, ticks=580/267, in_queue=847, util=95.35% 00:18:47.183 18:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:47.183 [global] 00:18:47.183 thread=1 00:18:47.183 invalidate=1 00:18:47.183 rw=randwrite 00:18:47.183 time_based=1 00:18:47.183 runtime=1 00:18:47.183 ioengine=libaio 00:18:47.183 direct=1 00:18:47.183 bs=4096 00:18:47.183 iodepth=1 00:18:47.183 norandommap=0 00:18:47.183 numjobs=1 00:18:47.183 00:18:47.183 verify_dump=1 00:18:47.183 verify_backlog=512 00:18:47.183 verify_state_save=0 00:18:47.183 do_verify=1 00:18:47.183 verify=crc32c-intel 00:18:47.183 [job0] 00:18:47.183 filename=/dev/nvme0n1 00:18:47.442 [job1] 00:18:47.442 filename=/dev/nvme0n2 00:18:47.442 [job2] 00:18:47.442 filename=/dev/nvme0n3 00:18:47.442 [job3] 00:18:47.442 filename=/dev/nvme0n4 00:18:47.442 Could not set queue depth (nvme0n1) 00:18:47.442 Could not set queue depth (nvme0n2) 00:18:47.442 Could not set queue depth (nvme0n3) 00:18:47.442 Could not set queue depth (nvme0n4) 00:18:47.442 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.442 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.442 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.442 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.442 fio-3.35 00:18:47.442 Starting 4 threads 00:18:48.818 00:18:48.818 job0: (groupid=0, jobs=1): err= 0: pid=3536853: Thu Jul 25 18:52:00 2024 00:18:48.818 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1017msec) 00:18:48.818 slat (nsec): min=12240, max=33265, avg=18791.65, stdev=8598.51 00:18:48.818 clat (usec): min=37683, max=44008, avg=40946.80, stdev=1030.51 00:18:48.818 lat (usec): min=37716, max=44022, avg=40965.60, stdev=1027.38 00:18:48.818 clat percentiles (usec): 00:18:48.818 | 1.00th=[37487], 5.00th=[37487], 10.00th=[40633], 
20.00th=[41157], 00:18:48.818 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:48.818 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:48.818 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:18:48.818 | 99.99th=[43779] 00:18:48.818 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:18:48.818 slat (nsec): min=8301, max=45999, avg=14095.67, stdev=5319.12 00:18:48.818 clat (usec): min=145, max=902, avg=367.92, stdev=85.87 00:18:48.818 lat (usec): min=163, max=912, avg=382.02, stdev=85.96 00:18:48.818 clat percentiles (usec): 00:18:48.818 | 1.00th=[ 180], 5.00th=[ 231], 10.00th=[ 273], 20.00th=[ 310], 00:18:48.818 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 379], 00:18:48.818 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 486], 95.00th=[ 515], 00:18:48.818 | 99.00th=[ 545], 99.50th=[ 865], 99.90th=[ 906], 99.95th=[ 906], 00:18:48.818 | 99.99th=[ 906] 00:18:48.818 bw ( KiB/s): min= 4096, max= 4096, per=33.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:48.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:48.818 lat (usec) : 250=6.95%, 500=82.71%, 750=6.02%, 1000=0.56% 00:18:48.818 lat (msec) : 50=3.76% 00:18:48.818 cpu : usr=0.20%, sys=0.79%, ctx=532, majf=0, minf=1 00:18:48.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.818 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:48.818 job1: (groupid=0, jobs=1): err= 0: pid=3536854: Thu Jul 25 18:52:00 2024 00:18:48.818 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:18:48.818 slat (nsec): min=10641, max=34735, avg=18863.26, stdev=8717.07 00:18:48.818 clat (usec): min=424, max=45993, avg=36067.15, stdev=14136.62 00:18:48.818 lat (usec): min=457, max=46010, avg=36086.02, stdev=14130.92 00:18:48.818 clat percentiles (usec): 00:18:48.818 | 1.00th=[ 424], 5.00th=[ 441], 10.00th=[ 562], 20.00th=[40633], 00:18:48.818 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:48.818 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:48.818 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:18:48.818 | 99.99th=[45876] 00:18:48.818 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:18:48.818 slat (nsec): min=9072, max=36895, avg=15899.32, stdev=4387.07 00:18:48.818 clat (usec): min=165, max=1173, avg=335.11, stdev=91.94 00:18:48.818 lat (usec): min=174, max=1182, avg=351.01, stdev=91.38 00:18:48.818 clat percentiles (usec): 00:18:48.818 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 206], 20.00th=[ 243], 00:18:48.818 | 30.00th=[ 297], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 367], 00:18:48.818 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 482], 00:18:48.818 | 99.00th=[ 537], 99.50th=[ 537], 99.90th=[ 1172], 99.95th=[ 1172], 00:18:48.818 | 99.99th=[ 1172] 00:18:48.818 bw ( KiB/s): min= 4096, max= 4096, per=33.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:48.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:48.818 lat (usec) : 250=20.19%, 500=72.52%, 750=3.36% 00:18:48.819 lat (msec) : 2=0.19%, 50=3.74% 00:18:48.819 cpu : usr=0.20%, sys=1.38%, ctx=536, majf=0, minf=2 00:18:48.819 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.819 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:48.819 job2: (groupid=0, jobs=1): err= 0: pid=3536855: Thu Jul 25 18:52:00 2024 00:18:48.819 read: IOPS=1021, BW=4088KiB/s (4186kB/s)(4104KiB/1004msec) 00:18:48.819 slat (nsec): min=5493, max=60989, avg=10560.22, stdev=5307.91 00:18:48.819 clat (usec): min=206, max=41123, avg=650.58, stdev=4007.39 00:18:48.819 lat (usec): min=214, max=41130, avg=661.14, stdev=4007.68 00:18:48.819 clat percentiles (usec): 00:18:48.819 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:18:48.819 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:18:48.819 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:18:48.819 | 99.00th=[ 502], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:48.819 | 99.99th=[41157] 00:18:48.819 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:18:48.819 slat (nsec): min=6007, max=44355, avg=13009.57, stdev=6678.75 00:18:48.819 clat (usec): min=140, max=1882, avg=192.63, stdev=64.56 00:18:48.819 lat (usec): min=147, max=1926, avg=205.64, stdev=67.09 00:18:48.819 clat percentiles (usec): 00:18:48.819 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:18:48.819 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:18:48.819 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 231], 95.00th=[ 273], 00:18:48.819 | 99.00th=[ 379], 99.50th=[ 433], 99.90th=[ 898], 99.95th=[ 1876], 00:18:48.819 | 99.99th=[ 1876] 00:18:48.819 bw ( KiB/s): min= 4096, max= 8192, per=50.85%, avg=6144.00, stdev=2896.31, samples=2 00:18:48.819 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:48.819 lat (usec) : 250=73.73%, 500=25.64%, 750=0.16%, 1000=0.04% 00:18:48.819 lat (msec) : 2=0.04%, 50=0.39% 00:18:48.819 cpu : usr=2.59%, sys=3.49%, ctx=2562, majf=0, minf=1 00:18:48.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.819 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:48.819 job3: (groupid=0, jobs=1): err= 0: pid=3536856: Thu Jul 25 18:52:00 2024 00:18:48.819 read: IOPS=19, BW=79.1KiB/s (80.9kB/s)(80.0KiB/1012msec) 00:18:48.819 slat (nsec): min=8687, max=28001, avg=15409.45, stdev=4006.63 00:18:48.819 clat (usec): min=40917, max=42034, avg=41250.13, stdev=427.55 00:18:48.819 lat (usec): min=40930, max=42050, avg=41265.54, stdev=427.09 00:18:48.819 clat percentiles (usec): 00:18:48.819 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:48.819 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:48.819 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:48.819 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:48.819 | 99.99th=[42206] 00:18:48.819 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:18:48.819 slat (nsec): min=7994, max=39199, avg=12172.52, stdev=2271.85 00:18:48.819 clat 
(usec): min=179, max=761, avg=347.42, stdev=66.52 00:18:48.819 lat (usec): min=192, max=772, avg=359.59, stdev=66.44 00:18:48.819 clat percentiles (usec): 00:18:48.819 | 1.00th=[ 194], 5.00th=[ 223], 10.00th=[ 265], 20.00th=[ 297], 00:18:48.819 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 371], 00:18:48.819 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 433], 00:18:48.819 | 99.00th=[ 515], 99.50th=[ 619], 99.90th=[ 758], 99.95th=[ 758], 00:18:48.819 | 99.99th=[ 758] 00:18:48.819 bw ( KiB/s): min= 4096, max= 4096, per=33.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:48.819 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:48.819 lat (usec) : 250=7.71%, 500=87.41%, 750=0.94%, 1000=0.19% 00:18:48.819 lat (msec) : 50=3.76% 00:18:48.819 cpu : usr=0.49%, sys=0.49%, ctx=533, majf=0, minf=1 00:18:48.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.819 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:48.819 00:18:48.819 Run status group 0 (all jobs): 00:18:48.819 READ: bw=4283KiB/s (4386kB/s), 78.7KiB/s-4088KiB/s (80.6kB/s-4186kB/s), io=4356KiB (4461kB), run=1004-1017msec 00:18:48.819 WRITE: bw=11.8MiB/s (12.4MB/s), 2014KiB/s-6120KiB/s (2062kB/s-6266kB/s), io=12.0MiB (12.6MB), run=1004-1017msec 00:18:48.819 00:18:48.819 Disk stats (read/write): 00:18:48.819 nvme0n1: ios=65/512, merge=0/0, ticks=642/187, in_queue=829, util=85.97% 00:18:48.819 nvme0n2: ios=65/512, merge=0/0, ticks=813/166, in_queue=979, util=97.46% 00:18:48.819 nvme0n3: ios=1024/1182, merge=0/0, ticks=577/222, in_queue=799, util=88.76% 00:18:48.819 nvme0n4: ios=39/512, merge=0/0, ticks=1561/177, in_queue=1738, util=96.93% 00:18:48.819 18:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:48.819 [global] 00:18:48.819 thread=1 00:18:48.819 invalidate=1 00:18:48.819 rw=write 00:18:48.819 time_based=1 00:18:48.819 runtime=1 00:18:48.819 ioengine=libaio 00:18:48.819 direct=1 00:18:48.819 bs=4096 00:18:48.819 iodepth=128 00:18:48.819 norandommap=0 00:18:48.819 numjobs=1 00:18:48.819 00:18:48.819 verify_dump=1 00:18:48.819 verify_backlog=512 00:18:48.819 verify_state_save=0 00:18:48.819 do_verify=1 00:18:48.819 verify=crc32c-intel 00:18:48.819 [job0] 00:18:48.819 filename=/dev/nvme0n1 00:18:48.819 [job1] 00:18:48.819 filename=/dev/nvme0n2 00:18:48.819 [job2] 00:18:48.819 filename=/dev/nvme0n3 00:18:48.819 [job3] 00:18:48.819 filename=/dev/nvme0n4 00:18:48.819 Could not set queue depth (nvme0n1) 00:18:48.819 Could not set queue depth (nvme0n2) 00:18:48.819 Could not set queue depth (nvme0n3) 00:18:48.819 Could not set queue depth (nvme0n4) 00:18:49.077 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:49.077 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:49.077 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:49.077 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:49.077 fio-3.35 00:18:49.077 Starting 4 threads 00:18:50.451 
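For reference, the target-side configuration that the xtrace above builds before these fio passes can be reproduced by hand with the same RPCs. The sketch below only condenses the calls already shown in the trace; it is a reconstruction, not additional test output, and the repository path, host NQN/ID, serial and listener address are simply the values that appear in this log and would differ on another setup.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with the same options target/fio.sh passes
$RPC nvmf_create_transport -t tcp -o -u 8192

# Seven malloc bdevs (named Malloc0..Malloc6 by the target), plus a RAID0 and a concat bdev
for i in 0 1 2 3 4 5 6; do $RPC bdev_malloc_create 64 512; done
$RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# Subsystem, namespaces and TCP listener, in the same order as the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# Host side: connect and check that all four namespaces report the test serial
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
             --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # waitforserial expects 4

The fio-wrapper runs that follow then exercise /dev/nvme0n1..n4 with the job parameters printed in each [global]/[jobN] listing above.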
00:18:50.451 job0: (groupid=0, jobs=1): err= 0: pid=3537088: Thu Jul 25 18:52:01 2024 00:18:50.451 read: IOPS=4179, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1005msec) 00:18:50.451 slat (usec): min=2, max=43703, avg=109.27, stdev=941.40 00:18:50.451 clat (usec): min=2260, max=54272, avg=15530.41, stdev=9535.29 00:18:50.451 lat (usec): min=4851, max=54275, avg=15639.68, stdev=9555.57 00:18:50.451 clat percentiles (usec): 00:18:50.451 | 1.00th=[ 6652], 5.00th=[10552], 10.00th=[11076], 20.00th=[11863], 00:18:50.451 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:18:50.451 | 70.00th=[13173], 80.00th=[13829], 90.00th=[26084], 95.00th=[46924], 00:18:50.451 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:18:50.451 | 99.99th=[54264] 00:18:50.451 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:18:50.451 slat (usec): min=3, max=36584, avg=109.90, stdev=929.88 00:18:50.451 clat (usec): min=3029, max=84382, avg=13122.74, stdev=7602.66 00:18:50.451 lat (usec): min=3042, max=84404, avg=13232.65, stdev=7646.81 00:18:50.451 clat percentiles (usec): 00:18:50.451 | 1.00th=[ 5997], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10683], 00:18:50.451 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:18:50.451 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13566], 95.00th=[14615], 00:18:50.451 | 99.00th=[58983], 99.50th=[58983], 99.90th=[84411], 99.95th=[84411], 00:18:50.451 | 99.99th=[84411] 00:18:50.451 bw ( KiB/s): min=16416, max=20288, per=25.64%, avg=18352.00, stdev=2737.92, samples=2 00:18:50.451 iops : min= 4104, max= 5072, avg=4588.00, stdev=684.48, samples=2 00:18:50.451 lat (msec) : 4=0.44%, 10=7.97%, 20=83.61%, 50=5.63%, 100=2.35% 00:18:50.451 cpu : usr=2.89%, sys=5.28%, ctx=407, majf=0, minf=1 00:18:50.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:50.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:50.451 issued rwts: total=4200,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:50.451 job1: (groupid=0, jobs=1): err= 0: pid=3537089: Thu Jul 25 18:52:01 2024 00:18:50.451 read: IOPS=4488, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1004msec) 00:18:50.451 slat (usec): min=3, max=10121, avg=110.17, stdev=636.63 00:18:50.451 clat (usec): min=2397, max=24918, avg=14154.55, stdev=2584.02 00:18:50.451 lat (usec): min=3842, max=25013, avg=14264.72, stdev=2632.62 00:18:50.451 clat percentiles (usec): 00:18:50.451 | 1.00th=[ 4883], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[12125], 00:18:50.451 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14353], 60.00th=[14877], 00:18:50.451 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16909], 95.00th=[17957], 00:18:50.451 | 99.00th=[21627], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:18:50.451 | 99.99th=[25035] 00:18:50.451 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:18:50.451 slat (usec): min=4, max=7977, avg=97.36, stdev=574.41 00:18:50.451 clat (usec): min=1795, max=68153, avg=13701.52, stdev=5545.76 00:18:50.451 lat (usec): min=1805, max=68160, avg=13798.88, stdev=5573.65 00:18:50.451 clat percentiles (usec): 00:18:50.451 | 1.00th=[ 4293], 5.00th=[ 7570], 10.00th=[ 9503], 20.00th=[11207], 00:18:50.451 | 30.00th=[11600], 40.00th=[13304], 50.00th=[13960], 60.00th=[14353], 00:18:50.451 | 70.00th=[14746], 80.00th=[15139], 90.00th=[16712], 
95.00th=[17433], 00:18:50.451 | 99.00th=[33424], 99.50th=[56886], 99.90th=[67634], 99.95th=[67634], 00:18:50.451 | 99.99th=[67634] 00:18:50.451 bw ( KiB/s): min=16648, max=20216, per=25.76%, avg=18432.00, stdev=2522.96, samples=2 00:18:50.451 iops : min= 4162, max= 5054, avg=4608.00, stdev=630.74, samples=2 00:18:50.451 lat (msec) : 2=0.04%, 4=0.44%, 10=8.58%, 20=88.91%, 50=1.61% 00:18:50.451 lat (msec) : 100=0.42% 00:18:50.451 cpu : usr=5.38%, sys=6.58%, ctx=403, majf=0, minf=1 00:18:50.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:50.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:50.451 issued rwts: total=4506,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:50.451 job2: (groupid=0, jobs=1): err= 0: pid=3537090: Thu Jul 25 18:52:01 2024 00:18:50.451 read: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(17.5MiB/1048msec) 00:18:50.451 slat (usec): min=2, max=10485, avg=115.44, stdev=716.44 00:18:50.451 clat (usec): min=7057, max=57552, avg=15366.42, stdev=7378.39 00:18:50.451 lat (usec): min=7286, max=63483, avg=15481.85, stdev=7407.07 00:18:50.451 clat percentiles (usec): 00:18:50.451 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[11994], 20.00th=[12780], 00:18:50.451 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[14091], 00:18:50.451 | 70.00th=[14746], 80.00th=[15926], 90.00th=[17695], 95.00th=[20317], 00:18:50.451 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:18:50.451 | 99.99th=[57410] 00:18:50.452 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:18:50.452 slat (usec): min=3, max=9932, avg=99.07, stdev=658.90 00:18:50.452 clat (usec): min=1008, max=42844, avg=13837.92, stdev=5113.56 00:18:50.452 lat (usec): min=1015, max=43688, avg=13936.99, stdev=5159.13 00:18:50.452 clat percentiles (usec): 00:18:50.452 | 1.00th=[ 6783], 5.00th=[ 7963], 10.00th=[ 9634], 20.00th=[11994], 00:18:50.452 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:18:50.452 | 70.00th=[13829], 80.00th=[14222], 90.00th=[16712], 95.00th=[19006], 00:18:50.452 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:18:50.452 | 99.99th=[42730] 00:18:50.452 bw ( KiB/s): min=17704, max=19160, per=25.76%, avg=18432.00, stdev=1029.55, samples=2 00:18:50.452 iops : min= 4426, max= 4790, avg=4608.00, stdev=257.39, samples=2 00:18:50.452 lat (msec) : 2=0.06%, 10=7.07%, 20=87.56%, 50=3.93%, 100=1.39% 00:18:50.452 cpu : usr=2.96%, sys=6.69%, ctx=395, majf=0, minf=1 00:18:50.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:50.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:50.452 issued rwts: total=4475,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:50.452 job3: (groupid=0, jobs=1): err= 0: pid=3537091: Thu Jul 25 18:52:01 2024 00:18:50.452 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:18:50.452 slat (usec): min=2, max=11742, avg=108.55, stdev=696.37 00:18:50.452 clat (usec): min=5295, max=25427, avg=13883.54, stdev=2624.64 00:18:50.452 lat (usec): min=5302, max=25476, avg=13992.09, stdev=2667.45 00:18:50.452 clat percentiles (usec): 00:18:50.452 | 1.00th=[ 7570], 5.00th=[ 
9634], 10.00th=[10683], 20.00th=[11994], 00:18:50.452 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[14091], 00:18:50.452 | 70.00th=[14746], 80.00th=[15270], 90.00th=[17695], 95.00th=[19006], 00:18:50.452 | 99.00th=[20055], 99.50th=[20841], 99.90th=[23200], 99.95th=[25297], 00:18:50.452 | 99.99th=[25297] 00:18:50.452 write: IOPS=4911, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1003msec); 0 zone resets 00:18:50.452 slat (usec): min=3, max=14171, avg=94.18, stdev=661.78 00:18:50.452 clat (usec): min=464, max=27356, avg=12844.83, stdev=2541.64 00:18:50.452 lat (usec): min=760, max=27377, avg=12939.01, stdev=2611.80 00:18:50.452 clat percentiles (usec): 00:18:50.452 | 1.00th=[ 4555], 5.00th=[ 8291], 10.00th=[ 9765], 20.00th=[11731], 00:18:50.452 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:18:50.452 | 70.00th=[13566], 80.00th=[14091], 90.00th=[15664], 95.00th=[17171], 00:18:50.452 | 99.00th=[18744], 99.50th=[19530], 99.90th=[23200], 99.95th=[23462], 00:18:50.452 | 99.99th=[27395] 00:18:50.452 bw ( KiB/s): min=17912, max=20480, per=26.82%, avg=19196.00, stdev=1815.85, samples=2 00:18:50.452 iops : min= 4478, max= 5120, avg=4799.00, stdev=453.96, samples=2 00:18:50.452 lat (usec) : 500=0.01%, 1000=0.03% 00:18:50.452 lat (msec) : 2=0.16%, 4=0.21%, 10=8.21%, 20=90.57%, 50=0.81% 00:18:50.452 cpu : usr=4.19%, sys=5.49%, ctx=367, majf=0, minf=1 00:18:50.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:50.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:50.452 issued rwts: total=4608,4926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:50.452 00:18:50.452 Run status group 0 (all jobs): 00:18:50.452 READ: bw=66.3MiB/s (69.5MB/s), 16.3MiB/s-17.9MiB/s (17.1MB/s-18.8MB/s), io=69.5MiB (72.9MB), run=1003-1048msec 00:18:50.452 WRITE: bw=69.9MiB/s (73.3MB/s), 17.2MiB/s-19.2MiB/s (18.0MB/s-20.1MB/s), io=73.2MiB (76.8MB), run=1003-1048msec 00:18:50.452 00:18:50.452 Disk stats (read/write): 00:18:50.452 nvme0n1: ios=3620/3627, merge=0/0, ticks=20702/16406, in_queue=37108, util=96.99% 00:18:50.452 nvme0n2: ios=3750/4096, merge=0/0, ticks=29825/29258, in_queue=59083, util=96.85% 00:18:50.452 nvme0n3: ios=3607/4007, merge=0/0, ticks=27848/28965, in_queue=56813, util=100.00% 00:18:50.452 nvme0n4: ios=4025/4096, merge=0/0, ticks=32658/30621, in_queue=63279, util=89.59% 00:18:50.452 18:52:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:50.452 [global] 00:18:50.452 thread=1 00:18:50.452 invalidate=1 00:18:50.452 rw=randwrite 00:18:50.452 time_based=1 00:18:50.452 runtime=1 00:18:50.452 ioengine=libaio 00:18:50.452 direct=1 00:18:50.452 bs=4096 00:18:50.452 iodepth=128 00:18:50.452 norandommap=0 00:18:50.452 numjobs=1 00:18:50.452 00:18:50.452 verify_dump=1 00:18:50.452 verify_backlog=512 00:18:50.452 verify_state_save=0 00:18:50.452 do_verify=1 00:18:50.452 verify=crc32c-intel 00:18:50.452 [job0] 00:18:50.452 filename=/dev/nvme0n1 00:18:50.452 [job1] 00:18:50.452 filename=/dev/nvme0n2 00:18:50.452 [job2] 00:18:50.452 filename=/dev/nvme0n3 00:18:50.452 [job3] 00:18:50.452 filename=/dev/nvme0n4 00:18:50.452 Could not set queue depth (nvme0n1) 00:18:50.452 Could not set queue depth (nvme0n2) 00:18:50.452 Could not set queue depth (nvme0n3) 00:18:50.452 
Could not set queue depth (nvme0n4) 00:18:50.452 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.452 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.452 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.452 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:50.452 fio-3.35 00:18:50.452 Starting 4 threads 00:18:51.858 00:18:51.858 job0: (groupid=0, jobs=1): err= 0: pid=3537434: Thu Jul 25 18:52:03 2024 00:18:51.858 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:18:51.858 slat (usec): min=2, max=10781, avg=104.33, stdev=715.91 00:18:51.858 clat (usec): min=4340, max=22859, avg=12673.63, stdev=3236.24 00:18:51.858 lat (usec): min=4345, max=22875, avg=12777.96, stdev=3280.99 00:18:51.858 clat percentiles (usec): 00:18:51.858 | 1.00th=[ 5473], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10683], 00:18:51.858 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:18:51.858 | 70.00th=[12256], 80.00th=[14615], 90.00th=[18220], 95.00th=[19792], 00:18:51.858 | 99.00th=[22152], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:18:51.858 | 99.99th=[22938] 00:18:51.858 write: IOPS=5410, BW=21.1MiB/s (22.2MB/s)(21.4MiB/1011msec); 0 zone resets 00:18:51.858 slat (usec): min=4, max=9306, avg=76.72, stdev=301.15 00:18:51.858 clat (usec): min=1313, max=22871, avg=11513.89, stdev=2669.90 00:18:51.858 lat (usec): min=1330, max=22878, avg=11590.61, stdev=2693.97 00:18:51.858 clat percentiles (usec): 00:18:51.858 | 1.00th=[ 3589], 5.00th=[ 5669], 10.00th=[ 6980], 20.00th=[10290], 00:18:51.859 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:18:51.859 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:18:51.859 | 99.00th=[18220], 99.50th=[20317], 99.90th=[22938], 99.95th=[22938], 00:18:51.859 | 99.99th=[22938] 00:18:51.859 bw ( KiB/s): min=20848, max=21896, per=28.76%, avg=21372.00, stdev=741.05, samples=2 00:18:51.859 iops : min= 5212, max= 5474, avg=5343.00, stdev=185.26, samples=2 00:18:51.859 lat (msec) : 2=0.02%, 4=0.72%, 10=13.25%, 20=83.28%, 50=2.74% 00:18:51.859 cpu : usr=5.74%, sys=9.11%, ctx=703, majf=0, minf=1 00:18:51.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:51.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.859 issued rwts: total=5120,5470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.859 job1: (groupid=0, jobs=1): err= 0: pid=3537435: Thu Jul 25 18:52:03 2024 00:18:51.859 read: IOPS=4683, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1011msec) 00:18:51.859 slat (usec): min=2, max=13425, avg=107.26, stdev=659.69 00:18:51.859 clat (usec): min=1235, max=28772, avg=13489.53, stdev=3523.17 00:18:51.859 lat (usec): min=7533, max=28778, avg=13596.79, stdev=3561.07 00:18:51.859 clat percentiles (usec): 00:18:51.859 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11600], 00:18:51.859 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[13304], 00:18:51.859 | 70.00th=[14091], 80.00th=[15270], 90.00th=[18482], 95.00th=[21365], 00:18:51.859 | 99.00th=[27395], 99.50th=[27657], 99.90th=[28181], 99.95th=[28705], 00:18:51.859 
| 99.99th=[28705] 00:18:51.859 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec); 0 zone resets 00:18:51.859 slat (usec): min=3, max=6783, avg=90.02, stdev=460.92 00:18:51.859 clat (usec): min=6326, max=29454, avg=12520.48, stdev=1847.60 00:18:51.859 lat (usec): min=6355, max=29461, avg=12610.50, stdev=1884.74 00:18:51.859 clat percentiles (usec): 00:18:51.859 | 1.00th=[ 8029], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[11731], 00:18:51.859 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:18:51.859 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13960], 95.00th=[15664], 00:18:51.859 | 99.00th=[19006], 99.50th=[21103], 99.90th=[28705], 99.95th=[29492], 00:18:51.859 | 99.99th=[29492] 00:18:51.859 bw ( KiB/s): min=20472, max=20480, per=27.55%, avg=20476.00, stdev= 5.66, samples=2 00:18:51.859 iops : min= 5118, max= 5120, avg=5119.00, stdev= 1.41, samples=2 00:18:51.859 lat (msec) : 2=0.01%, 10=7.07%, 20=89.09%, 50=3.83% 00:18:51.859 cpu : usr=5.64%, sys=6.93%, ctx=561, majf=0, minf=1 00:18:51.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:51.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.859 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.859 job2: (groupid=0, jobs=1): err= 0: pid=3537436: Thu Jul 25 18:52:03 2024 00:18:51.859 read: IOPS=4023, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1006msec) 00:18:51.859 slat (usec): min=2, max=7713, avg=128.62, stdev=708.08 00:18:51.859 clat (usec): min=1303, max=26551, avg=15578.77, stdev=2728.43 00:18:51.859 lat (usec): min=6929, max=26588, avg=15707.40, stdev=2784.10 00:18:51.859 clat percentiles (usec): 00:18:51.859 | 1.00th=[ 7308], 5.00th=[10945], 10.00th=[12780], 20.00th=[13829], 00:18:51.859 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15401], 60.00th=[15795], 00:18:51.859 | 70.00th=[16057], 80.00th=[17433], 90.00th=[19530], 95.00th=[20579], 00:18:51.859 | 99.00th=[22676], 99.50th=[22676], 99.90th=[25297], 99.95th=[26084], 00:18:51.859 | 99.99th=[26608] 00:18:51.859 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:18:51.859 slat (usec): min=3, max=5152, avg=109.62, stdev=362.13 00:18:51.859 clat (usec): min=7205, max=22999, avg=15620.82, stdev=1889.75 00:18:51.859 lat (usec): min=7216, max=23016, avg=15730.44, stdev=1908.91 00:18:51.859 clat percentiles (usec): 00:18:51.859 | 1.00th=[ 9896], 5.00th=[13042], 10.00th=[13829], 20.00th=[14353], 00:18:51.859 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:18:51.859 | 70.00th=[16057], 80.00th=[16712], 90.00th=[17957], 95.00th=[19006], 00:18:51.859 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22938], 99.95th=[22938], 00:18:51.859 | 99.99th=[22938] 00:18:51.859 bw ( KiB/s): min=16384, max=16384, per=22.05%, avg=16384.00, stdev= 0.00, samples=2 00:18:51.859 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:51.859 lat (msec) : 2=0.01%, 10=1.71%, 20=92.92%, 50=5.37% 00:18:51.859 cpu : usr=4.58%, sys=6.47%, ctx=588, majf=0, minf=1 00:18:51.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:51.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.859 issued rwts: total=4048,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:51.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.859 job3: (groupid=0, jobs=1): err= 0: pid=3537437: Thu Jul 25 18:52:03 2024 00:18:51.859 read: IOPS=3607, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1011msec) 00:18:51.859 slat (usec): min=2, max=12329, avg=118.32, stdev=661.59 00:18:51.859 clat (usec): min=952, max=34853, avg=14135.12, stdev=3183.85 00:18:51.859 lat (usec): min=4969, max=36911, avg=14253.44, stdev=3224.56 00:18:51.859 clat percentiles (usec): 00:18:51.859 | 1.00th=[ 6325], 5.00th=[10028], 10.00th=[11207], 20.00th=[13173], 00:18:51.859 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:18:51.859 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15533], 95.00th=[20317], 00:18:51.859 | 99.00th=[27657], 99.50th=[29492], 99.90th=[34866], 99.95th=[34866], 00:18:51.859 | 99.99th=[34866] 00:18:51.859 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:18:51.859 slat (usec): min=3, max=9236, avg=132.84, stdev=646.12 00:18:51.859 clat (usec): min=7553, max=64096, avg=18533.39, stdev=12022.25 00:18:51.859 lat (usec): min=7561, max=64104, avg=18666.23, stdev=12108.38 00:18:51.859 clat percentiles (usec): 00:18:51.859 | 1.00th=[10159], 5.00th=[11994], 10.00th=[13173], 20.00th=[13435], 00:18:51.859 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:18:51.859 | 70.00th=[14615], 80.00th=[16581], 90.00th=[44303], 95.00th=[52691], 00:18:51.859 | 99.00th=[59507], 99.50th=[62653], 99.90th=[64226], 99.95th=[64226], 00:18:51.859 | 99.99th=[64226] 00:18:51.859 bw ( KiB/s): min=13224, max=19024, per=21.70%, avg=16124.00, stdev=4101.22, samples=2 00:18:51.859 iops : min= 3306, max= 4756, avg=4031.00, stdev=1025.30, samples=2 00:18:51.859 lat (usec) : 1000=0.01% 00:18:51.859 lat (msec) : 10=2.63%, 20=87.41%, 50=6.99%, 100=2.96% 00:18:51.859 cpu : usr=4.16%, sys=6.14%, ctx=489, majf=0, minf=1 00:18:51.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:51.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.859 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.859 00:18:51.859 Run status group 0 (all jobs): 00:18:51.859 READ: bw=67.8MiB/s (71.1MB/s), 14.1MiB/s-19.8MiB/s (14.8MB/s-20.7MB/s), io=68.6MiB (71.9MB), run=1006-1011msec 00:18:51.859 WRITE: bw=72.6MiB/s (76.1MB/s), 15.8MiB/s-21.1MiB/s (16.6MB/s-22.2MB/s), io=73.4MiB (76.9MB), run=1006-1011msec 00:18:51.859 00:18:51.859 Disk stats (read/write): 00:18:51.859 nvme0n1: ios=4136/4608, merge=0/0, ticks=50779/52356, in_queue=103135, util=97.29% 00:18:51.859 nvme0n2: ios=4118/4304, merge=0/0, ticks=26634/24279, in_queue=50913, util=86.99% 00:18:51.859 nvme0n3: ios=3183/3584, merge=0/0, ticks=23541/23760, in_queue=47301, util=97.81% 00:18:51.859 nvme0n4: ios=3627/3719, merge=0/0, ticks=18372/19640, in_queue=38012, util=97.27% 00:18:51.859 18:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:51.859 18:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3537585 00:18:51.859 18:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:51.859 18:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:51.859 [global] 00:18:51.859 thread=1 00:18:51.859 invalidate=1 00:18:51.859 
rw=read 00:18:51.859 time_based=1 00:18:51.859 runtime=10 00:18:51.859 ioengine=libaio 00:18:51.859 direct=1 00:18:51.859 bs=4096 00:18:51.859 iodepth=1 00:18:51.859 norandommap=1 00:18:51.859 numjobs=1 00:18:51.859 00:18:51.859 [job0] 00:18:51.859 filename=/dev/nvme0n1 00:18:51.859 [job1] 00:18:51.859 filename=/dev/nvme0n2 00:18:51.859 [job2] 00:18:51.859 filename=/dev/nvme0n3 00:18:51.859 [job3] 00:18:51.859 filename=/dev/nvme0n4 00:18:51.859 Could not set queue depth (nvme0n1) 00:18:51.859 Could not set queue depth (nvme0n2) 00:18:51.859 Could not set queue depth (nvme0n3) 00:18:51.859 Could not set queue depth (nvme0n4) 00:18:51.860 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.860 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.860 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.860 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.860 fio-3.35 00:18:51.860 Starting 4 threads 00:18:55.145 18:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:55.145 18:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:55.145 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=27316224, buflen=4096 00:18:55.145 fio: pid=3537679, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:55.145 18:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:55.145 18:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:55.145 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27852800, buflen=4096 00:18:55.145 fio: pid=3537678, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:55.403 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=31035392, buflen=4096 00:18:55.403 fio: pid=3537676, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:55.403 18:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:55.403 18:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:55.660 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=22806528, buflen=4096 00:18:55.660 fio: pid=3537677, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:55.660 18:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:55.660 18:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:55.919 00:18:55.919 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3537676: Thu Jul 25 18:52:07 2024 00:18:55.919 read: IOPS=2215, BW=8862KiB/s (9075kB/s)(29.6MiB/3420msec) 00:18:55.919 slat (usec): min=4, max=21570, avg=15.19, stdev=335.36 00:18:55.919 clat (usec): min=195, max=42367, 
avg=430.44, stdev=2605.42 00:18:55.919 lat (usec): min=200, max=42375, avg=445.63, stdev=2627.18 00:18:55.919 clat percentiles (usec): 00:18:55.919 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:18:55.919 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 262], 00:18:55.919 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 330], 00:18:55.919 | 99.00th=[ 523], 99.50th=[ 775], 99.90th=[42206], 99.95th=[42206], 00:18:55.919 | 99.99th=[42206] 00:18:55.919 bw ( KiB/s): min= 160, max=14440, per=27.21%, avg=7869.33, stdev=6772.73, samples=6 00:18:55.919 iops : min= 40, max= 3610, avg=1967.33, stdev=1693.18, samples=6 00:18:55.919 lat (usec) : 250=47.29%, 500=51.36%, 750=0.79%, 1000=0.12% 00:18:55.919 lat (msec) : 2=0.01%, 20=0.01%, 50=0.40% 00:18:55.919 cpu : usr=1.55%, sys=3.36%, ctx=7580, majf=0, minf=1 00:18:55.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 issued rwts: total=7578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.919 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3537677: Thu Jul 25 18:52:07 2024 00:18:55.919 read: IOPS=1512, BW=6051KiB/s (6196kB/s)(21.8MiB/3681msec) 00:18:55.919 slat (usec): min=4, max=7063, avg=19.66, stdev=160.96 00:18:55.919 clat (usec): min=203, max=63342, avg=637.54, stdev=3674.02 00:18:55.919 lat (usec): min=209, max=63358, avg=655.94, stdev=3704.13 00:18:55.919 clat percentiles (usec): 00:18:55.919 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 251], 00:18:55.919 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 314], 00:18:55.919 | 70.00th=[ 330], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 445], 00:18:55.919 | 99.00th=[ 578], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:18:55.919 | 99.99th=[63177] 00:18:55.919 bw ( KiB/s): min= 352, max=13648, per=21.87%, avg=6324.29, stdev=5653.36, samples=7 00:18:55.919 iops : min= 88, max= 3412, avg=1581.00, stdev=1413.41, samples=7 00:18:55.919 lat (usec) : 250=20.09%, 500=77.75%, 750=1.33%, 1000=0.02% 00:18:55.919 lat (msec) : 50=0.77%, 100=0.02% 00:18:55.919 cpu : usr=1.03%, sys=2.93%, ctx=5572, majf=0, minf=1 00:18:55.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 issued rwts: total=5569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.919 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3537678: Thu Jul 25 18:52:07 2024 00:18:55.919 read: IOPS=2133, BW=8535KiB/s (8740kB/s)(26.6MiB/3187msec) 00:18:55.919 slat (usec): min=4, max=15138, avg=15.86, stdev=231.66 00:18:55.919 clat (usec): min=210, max=41983, avg=446.64, stdev=2613.04 00:18:55.919 lat (usec): min=217, max=41999, avg=462.50, stdev=2623.58 00:18:55.919 clat percentiles (usec): 00:18:55.919 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:18:55.919 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:18:55.919 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 343], 00:18:55.919 | 
99.00th=[ 498], 99.50th=[ 791], 99.90th=[41157], 99.95th=[41157], 00:18:55.919 | 99.99th=[42206] 00:18:55.919 bw ( KiB/s): min= 1264, max=13776, per=30.80%, avg=8908.00, stdev=5504.60, samples=6 00:18:55.919 iops : min= 316, max= 3444, avg=2227.00, stdev=1376.15, samples=6 00:18:55.919 lat (usec) : 250=22.16%, 500=76.87%, 750=0.37%, 1000=0.15% 00:18:55.919 lat (msec) : 2=0.01%, 4=0.01%, 50=0.41% 00:18:55.919 cpu : usr=1.44%, sys=3.77%, ctx=6804, majf=0, minf=1 00:18:55.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 issued rwts: total=6801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.919 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3537679: Thu Jul 25 18:52:07 2024 00:18:55.919 read: IOPS=2295, BW=9180KiB/s (9400kB/s)(26.1MiB/2906msec) 00:18:55.919 slat (nsec): min=5758, max=53699, avg=11322.42, stdev=6112.26 00:18:55.919 clat (usec): min=217, max=42009, avg=417.37, stdev=2143.55 00:18:55.919 lat (usec): min=226, max=42025, avg=428.69, stdev=2144.18 00:18:55.919 clat percentiles (usec): 00:18:55.919 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 265], 00:18:55.919 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:18:55.919 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 367], 95.00th=[ 412], 00:18:55.919 | 99.00th=[ 529], 99.50th=[ 717], 99.90th=[42206], 99.95th=[42206], 00:18:55.919 | 99.99th=[42206] 00:18:55.919 bw ( KiB/s): min= 104, max=13320, per=29.93%, avg=8657.60, stdev=5448.27, samples=5 00:18:55.919 iops : min= 26, max= 3330, avg=2164.40, stdev=1362.07, samples=5 00:18:55.919 lat (usec) : 250=8.73%, 500=89.93%, 750=0.87%, 1000=0.19% 00:18:55.919 lat (msec) : 50=0.27% 00:18:55.919 cpu : usr=2.03%, sys=3.55%, ctx=6671, majf=0, minf=1 00:18:55.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.919 issued rwts: total=6670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.919 00:18:55.919 Run status group 0 (all jobs): 00:18:55.919 READ: bw=28.2MiB/s (29.6MB/s), 6051KiB/s-9180KiB/s (6196kB/s-9400kB/s), io=104MiB (109MB), run=2906-3681msec 00:18:55.919 00:18:55.919 Disk stats (read/write): 00:18:55.919 nvme0n1: ios=7384/0, merge=0/0, ticks=3144/0, in_queue=3144, util=94.79% 00:18:55.919 nvme0n2: ios=5606/0, merge=0/0, ticks=3540/0, in_queue=3540, util=99.49% 00:18:55.919 nvme0n3: ios=6846/0, merge=0/0, ticks=3080/0, in_queue=3080, util=98.63% 00:18:55.919 nvme0n4: ios=6633/0, merge=0/0, ticks=3652/0, in_queue=3652, util=99.49% 00:18:55.919 18:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:55.919 18:52:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:56.177 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:56.177 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:56.434 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:56.434 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:56.691 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:56.691 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:56.949 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:56.949 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3537585 00:18:56.949 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:56.949 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:57.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:57.206 nvmf hotplug test: fio failed as expected 00:18:57.206 18:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.464 rmmod nvme_tcp 00:18:57.464 rmmod nvme_fabrics 00:18:57.464 rmmod nvme_keyring 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3535556 ']' 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3535556 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3535556 ']' 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3535556 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3535556 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3535556' 00:18:57.464 killing process with pid 3535556 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3535556 00:18:57.464 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3535556 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.722 18:52:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.257 18:52:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.257 00:19:00.257 real 0m23.034s 00:19:00.257 user 1m19.489s 00:19:00.257 sys 0m7.434s 00:19:00.257 18:52:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:00.257 18:52:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.257 ************************************ 00:19:00.257 END TEST nvmf_fio_target 00:19:00.257 ************************************ 00:19:00.257 18:52:11 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:00.257 18:52:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:00.257 18:52:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.257 18:52:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.257 ************************************ 00:19:00.257 START TEST nvmf_bdevio 00:19:00.257 ************************************ 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:00.257 * Looking for test storage... 00:19:00.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.257 18:52:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:02.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:02.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:02.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:02.162 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:02.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:19:02.162 00:19:02.162 --- 10.0.0.2 ping statistics --- 00:19:02.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.162 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:19:02.162 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:02.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:19:02.162 00:19:02.162 --- 10.0.0.1 ping statistics --- 00:19:02.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.163 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3540283 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3540283 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3540283 ']' 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:02.163 18:52:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.163 [2024-07-25 18:52:13.785729] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:19:02.163 [2024-07-25 18:52:13.785807] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.163 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.163 [2024-07-25 18:52:13.852522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:02.163 [2024-07-25 18:52:13.941550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.163 [2024-07-25 18:52:13.941624] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:02.163 [2024-07-25 18:52:13.941638] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.163 [2024-07-25 18:52:13.941648] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.163 [2024-07-25 18:52:13.941658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.163 [2024-07-25 18:52:13.941746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:02.163 [2024-07-25 18:52:13.941792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:02.163 [2024-07-25 18:52:13.941875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:02.163 [2024-07-25 18:52:13.941877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.424 [2024-07-25 18:52:14.103887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.424 Malloc0 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:02.424 [2024-07-25 18:52:14.157523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:02.424 { 00:19:02.424 "params": { 00:19:02.424 "name": "Nvme$subsystem", 00:19:02.424 "trtype": "$TEST_TRANSPORT", 00:19:02.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.424 "adrfam": "ipv4", 00:19:02.424 "trsvcid": "$NVMF_PORT", 00:19:02.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.424 "hdgst": ${hdgst:-false}, 00:19:02.424 "ddgst": ${ddgst:-false} 00:19:02.424 }, 00:19:02.424 "method": "bdev_nvme_attach_controller" 00:19:02.424 } 00:19:02.424 EOF 00:19:02.424 )") 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:02.424 18:52:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:02.424 "params": { 00:19:02.424 "name": "Nvme1", 00:19:02.424 "trtype": "tcp", 00:19:02.424 "traddr": "10.0.0.2", 00:19:02.424 "adrfam": "ipv4", 00:19:02.424 "trsvcid": "4420", 00:19:02.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.424 "hdgst": false, 00:19:02.424 "ddgst": false 00:19:02.424 }, 00:19:02.424 "method": "bdev_nvme_attach_controller" 00:19:02.424 }' 00:19:02.424 [2024-07-25 18:52:14.204795] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:19:02.424 [2024-07-25 18:52:14.204879] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3540313 ] 00:19:02.424 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.424 [2024-07-25 18:52:14.265235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.682 [2024-07-25 18:52:14.356165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.682 [2024-07-25 18:52:14.356216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.682 [2024-07-25 18:52:14.356220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.941 I/O targets: 00:19:02.941 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:02.941 00:19:02.941 00:19:02.941 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.941 http://cunit.sourceforge.net/ 00:19:02.941 00:19:02.941 00:19:02.941 Suite: bdevio tests on: Nvme1n1 00:19:02.941 Test: blockdev write read block ...passed 00:19:02.941 Test: blockdev write zeroes read block ...passed 00:19:02.941 Test: blockdev write zeroes read no split ...passed 00:19:02.941 Test: blockdev write zeroes read split ...passed 00:19:02.941 Test: blockdev write zeroes read split partial ...passed 00:19:02.941 Test: blockdev reset ...[2024-07-25 18:52:14.766891] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.941 [2024-07-25 18:52:14.766996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d7a00 (9): Bad file descriptor 00:19:02.941 [2024-07-25 18:52:14.779356] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:02.941 passed 00:19:02.941 Test: blockdev write read 8 blocks ...passed 00:19:02.941 Test: blockdev write read size > 128k ...passed 00:19:02.941 Test: blockdev write read invalid size ...passed 00:19:03.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:03.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:03.199 Test: blockdev write read max offset ...passed 00:19:03.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.199 Test: blockdev writev readv 8 blocks ...passed 00:19:03.199 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.199 Test: blockdev writev readv block ...passed 00:19:03.199 Test: blockdev writev readv size > 128k ...passed 00:19:03.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.199 Test: blockdev comparev and writev ...[2024-07-25 18:52:14.993973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.994010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.199 [2024-07-25 18:52:14.994033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.994050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:03.199 [2024-07-25 18:52:14.994407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.994435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:03.199 [2024-07-25 18:52:14.994457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.994474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:03.199 [2024-07-25 18:52:14.994829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.994854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:03.199 [2024-07-25 18:52:14.994876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.994893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:03.199 [2024-07-25 18:52:14.995264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.995289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:03.199 [2024-07-25 18:52:14.995311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.199 [2024-07-25 18:52:14.995327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:03.199 passed 00:19:03.459 Test: blockdev nvme passthru rw ...passed 00:19:03.459 Test: blockdev nvme passthru vendor specific ...[2024-07-25 18:52:15.078354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.459 [2024-07-25 18:52:15.078382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:03.459 [2024-07-25 18:52:15.078541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.459 [2024-07-25 18:52:15.078565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:03.459 [2024-07-25 18:52:15.078718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.459 [2024-07-25 18:52:15.078741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:03.459 [2024-07-25 18:52:15.078899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.459 [2024-07-25 18:52:15.078923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:03.459 passed 00:19:03.459 Test: blockdev nvme admin passthru ...passed 00:19:03.459 Test: blockdev copy ...passed 00:19:03.459 00:19:03.459 Run Summary: Type Total Ran Passed Failed Inactive 00:19:03.459 suites 1 1 n/a 0 0 00:19:03.459 tests 23 23 23 0 0 00:19:03.459 asserts 152 152 152 0 n/a 00:19:03.459 00:19:03.459 Elapsed time = 1.140 seconds 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.719 rmmod nvme_tcp 00:19:03.719 rmmod nvme_fabrics 00:19:03.719 rmmod nvme_keyring 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3540283 ']' 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3540283 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3540283 ']' 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3540283 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3540283 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3540283' 00:19:03.719 killing process with pid 3540283 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3540283 00:19:03.719 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3540283 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.979 18:52:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.882 18:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:05.882 00:19:05.882 real 0m6.155s 00:19:05.882 user 0m9.795s 00:19:05.882 sys 0m2.012s 00:19:05.882 18:52:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:05.882 18:52:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.882 ************************************ 00:19:05.882 END TEST nvmf_bdevio 00:19:05.882 ************************************ 00:19:06.140 18:52:17 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:06.140 18:52:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:06.140 18:52:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:06.140 18:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.140 ************************************ 00:19:06.140 START TEST nvmf_auth_target 00:19:06.140 ************************************ 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:06.140 * Looking for test storage... 
00:19:06.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.140 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.141 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.045 18:52:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:08.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:08.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:08.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:08.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.045 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:08.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:19:08.046 00:19:08.046 --- 10.0.0.2 ping statistics --- 00:19:08.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.046 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:08.046 00:19:08.046 --- 10.0.0.1 ping statistics --- 00:19:08.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.046 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3542380 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3542380 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3542380 ']' 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
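The trace above is nvmf_tcp_init building the TCP fixture: one NIC port (cvl_0_0) is moved into a fresh network namespace to act as the target at 10.0.0.2/24, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in the firewall, and reachability is checked with a ping in each direction before nvmf_tgt is started inside the namespace. A condensed sketch of the same sequence (interface and namespace names taken from this run; they vary per machine):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                        # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                    # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target namespace -> initiator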
00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:08.046 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3542406 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=89067d53f0526c029d7243a0d24e9a499bdb4f5177b01654 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hAX 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 89067d53f0526c029d7243a0d24e9a499bdb4f5177b01654 0 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 89067d53f0526c029d7243a0d24e9a499bdb4f5177b01654 0 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=89067d53f0526c029d7243a0d24e9a499bdb4f5177b01654 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hAX 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hAX 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.hAX 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bdc09fd5fa2f3ce2e9e3c7172e796628b131163e2a302fb0563a5f39da9a3196 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xZg 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bdc09fd5fa2f3ce2e9e3c7172e796628b131163e2a302fb0563a5f39da9a3196 3 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bdc09fd5fa2f3ce2e9e3c7172e796628b131163e2a302fb0563a5f39da9a3196 3 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bdc09fd5fa2f3ce2e9e3c7172e796628b131163e2a302fb0563a5f39da9a3196 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xZg 00:19:08.614 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xZg 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xZg 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=89219c41c04e9a6c40628feab9b7711b 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JBb 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 89219c41c04e9a6c40628feab9b7711b 1 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 89219c41c04e9a6c40628feab9b7711b 1 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=89219c41c04e9a6c40628feab9b7711b 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JBb 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JBb 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.JBb 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=02b913d8e8459fcdd54cd5e02be272b2ceeb6aa179ea6331 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.m67 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 02b913d8e8459fcdd54cd5e02be272b2ceeb6aa179ea6331 2 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 02b913d8e8459fcdd54cd5e02be272b2ceeb6aa179ea6331 2 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=02b913d8e8459fcdd54cd5e02be272b2ceeb6aa179ea6331 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.m67 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.m67 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.m67 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=967274f54a8f9c3c413e72e1989b2c3f9f598def51c83ff0 00:19:08.615 
18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sGP 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 967274f54a8f9c3c413e72e1989b2c3f9f598def51c83ff0 2 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 967274f54a8f9c3c413e72e1989b2c3f9f598def51c83ff0 2 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=967274f54a8f9c3c413e72e1989b2c3f9f598def51c83ff0 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:08.615 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sGP 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sGP 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.sGP 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b76157500188beee0a1bca79cbcf28e5 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KA2 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b76157500188beee0a1bca79cbcf28e5 1 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b76157500188beee0a1bca79cbcf28e5 1 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b76157500188beee0a1bca79cbcf28e5 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KA2 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KA2 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.KA2 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e3419031b5579777055311d3c9167f763cd5a48dcfc697740ce594714275f9cc 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KIF 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e3419031b5579777055311d3c9167f763cd5a48dcfc697740ce594714275f9cc 3 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e3419031b5579777055311d3c9167f763cd5a48dcfc697740ce594714275f9cc 3 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e3419031b5579777055311d3c9167f763cd5a48dcfc697740ce594714275f9cc 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KIF 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KIF 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.KIF 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3542380 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3542380 ']' 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
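At this point seven DH-HMAC-CHAP secrets have been written under /tmp: keys[0..3] (digests null, sha256, sha384, sha512) plus controller keys ckeys[0..2], with ckeys[3] deliberately left empty. Judging from the secrets later replayed to nvme-cli in this trace, format_dhchap_key base64-encodes the hex key material together with what looks like a 4-byte CRC-32 trailer and prefixes a two-digit digest id (00=null, 01=sha256, 02=sha384, 03=sha512, matching the digests map above). A minimal stand-alone sketch of that encoding (an approximation of the helper, not the helper itself; the trailer's byte order is assumed):

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars of key material, as in gen_dhchap_key null 48
    digest=0                               # null digest -> id 00
    # base64 over (ASCII hex string || CRC-32 of it), assumed little-endian trailer
    python3 -c 'import sys,base64,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"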
00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:08.873 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3542406 /var/tmp/host.sock 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3542406 ']' 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:09.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:09.131 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hAX 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hAX 00:19:09.388 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hAX 00:19:09.646 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xZg ]] 00:19:09.646 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xZg 00:19:09.646 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.646 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.646 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.646 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xZg 00:19:09.646 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xZg 00:19:09.904 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:09.904 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JBb 00:19:09.904 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.904 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.904 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.904 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JBb 00:19:09.904 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JBb 00:19:10.199 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.m67 ]] 00:19:10.199 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.m67 00:19:10.199 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.199 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.199 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.199 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.m67 00:19:10.199 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.m67 00:19:10.456 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:10.456 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sGP 00:19:10.456 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.456 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.456 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.456 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.sGP 00:19:10.456 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.sGP 00:19:10.714 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.KA2 ]] 00:19:10.714 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KA2 00:19:10.714 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.714 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.714 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.714 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KA2 00:19:10.714 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.KA2 00:19:10.971 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:10.971 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KIF 00:19:10.971 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.971 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.971 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.971 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.KIF 00:19:10.971 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.KIF 00:19:11.229 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:11.229 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:11.229 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.229 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.229 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.229 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.540 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.797 00:19:11.797 18:52:23 nvmf_tcp.nvmf_auth_target -- 
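With every secret registered in both keyrings (key0..key3 and ckey0..ckey2 via keyring_file_add_key, against the target's default /var/tmp/spdk.sock and the host app's /var/tmp/host.sock), each round follows the same RPC pattern: constrain the host's DH-HMAC-CHAP digests/dhgroups, grant the host NQN access to the subsystem with a key pair, then attach a controller through the host-side RPC server. A stripped-down sketch of one round, with the long Jenkins path to rpc.py shortened and this fixture's host NQN reused (the ckeyN files are registered the same way as keyN):

    RPC="spdk/scripts/rpc.py"; HOSTSOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    $RPC keyring_file_add_key key0 /tmp/spdk.key-null.hAX                 # target keyring
    $RPC -s $HOSTSOCK keyring_file_add_key key0 /tmp/spdk.key-null.hAX    # host keyring
    $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0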
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.797 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.797 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.054 { 00:19:12.054 "cntlid": 1, 00:19:12.054 "qid": 0, 00:19:12.054 "state": "enabled", 00:19:12.054 "listen_address": { 00:19:12.054 "trtype": "TCP", 00:19:12.054 "adrfam": "IPv4", 00:19:12.054 "traddr": "10.0.0.2", 00:19:12.054 "trsvcid": "4420" 00:19:12.054 }, 00:19:12.054 "peer_address": { 00:19:12.054 "trtype": "TCP", 00:19:12.054 "adrfam": "IPv4", 00:19:12.054 "traddr": "10.0.0.1", 00:19:12.054 "trsvcid": "59158" 00:19:12.054 }, 00:19:12.054 "auth": { 00:19:12.054 "state": "completed", 00:19:12.054 "digest": "sha256", 00:19:12.054 "dhgroup": "null" 00:19:12.054 } 00:19:12.054 } 00:19:12.054 ]' 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.054 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.312 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:19:13.247 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.248 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.248 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.248 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:13.248 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.248 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.248 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.248 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.813 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.813 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.071 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.071 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.071 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.071 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.328 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.329 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.329 { 00:19:14.329 "cntlid": 3, 00:19:14.329 "qid": 0, 00:19:14.329 "state": "enabled", 00:19:14.329 "listen_address": { 00:19:14.329 
"trtype": "TCP", 00:19:14.329 "adrfam": "IPv4", 00:19:14.329 "traddr": "10.0.0.2", 00:19:14.329 "trsvcid": "4420" 00:19:14.329 }, 00:19:14.329 "peer_address": { 00:19:14.329 "trtype": "TCP", 00:19:14.329 "adrfam": "IPv4", 00:19:14.329 "traddr": "10.0.0.1", 00:19:14.329 "trsvcid": "59186" 00:19:14.329 }, 00:19:14.329 "auth": { 00:19:14.329 "state": "completed", 00:19:14.329 "digest": "sha256", 00:19:14.329 "dhgroup": "null" 00:19:14.329 } 00:19:14.329 } 00:19:14.329 ]' 00:19:14.329 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.329 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.329 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.329 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.329 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.329 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.329 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.329 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.586 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.519 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.777 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.034 00:19:16.034 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.034 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.034 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.292 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.292 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.292 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.292 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.292 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.292 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.292 { 00:19:16.292 "cntlid": 5, 00:19:16.292 "qid": 0, 00:19:16.292 "state": "enabled", 00:19:16.292 "listen_address": { 00:19:16.292 "trtype": "TCP", 00:19:16.292 "adrfam": "IPv4", 00:19:16.292 "traddr": "10.0.0.2", 00:19:16.292 "trsvcid": "4420" 00:19:16.292 }, 00:19:16.292 "peer_address": { 00:19:16.292 "trtype": "TCP", 00:19:16.292 "adrfam": "IPv4", 00:19:16.292 "traddr": "10.0.0.1", 00:19:16.292 "trsvcid": "59210" 00:19:16.292 }, 00:19:16.292 "auth": { 00:19:16.292 "state": "completed", 00:19:16.292 "digest": "sha256", 00:19:16.292 "dhgroup": "null" 00:19:16.292 } 00:19:16.292 } 00:19:16.292 ]' 00:19:16.292 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.549 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.549 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.549 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:16.549 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.549 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.549 18:52:28 nvmf_tcp.nvmf_auth_target -- 
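Each successful attach is verified, detached, and then replayed from the kernel initiator: nvme-cli connects with the literal DHHC-1 strings for the same key pair, disconnects, and the host entry is removed from the subsystem before the next key is tried. The equivalent host-side pair of commands, with the generated secrets abbreviated here for readability:

    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid $HOSTID \
        --dhchap-secret 'DHHC-1:02:OTY3...' --dhchap-ctrl-secret 'DHHC-1:01:Yjc2...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0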
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.549 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.806 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:19:17.741 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.742 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.742 18:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.742 18:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.742 18:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.742 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.742 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:17.742 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.000 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.258 00:19:18.258 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.258 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.258 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.516 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.516 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.516 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.516 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.516 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.516 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.516 { 00:19:18.516 "cntlid": 7, 00:19:18.516 "qid": 0, 00:19:18.516 "state": "enabled", 00:19:18.516 "listen_address": { 00:19:18.516 "trtype": "TCP", 00:19:18.516 "adrfam": "IPv4", 00:19:18.516 "traddr": "10.0.0.2", 00:19:18.516 "trsvcid": "4420" 00:19:18.516 }, 00:19:18.516 "peer_address": { 00:19:18.516 "trtype": "TCP", 00:19:18.516 "adrfam": "IPv4", 00:19:18.516 "traddr": "10.0.0.1", 00:19:18.516 "trsvcid": "57508" 00:19:18.516 }, 00:19:18.516 "auth": { 00:19:18.516 "state": "completed", 00:19:18.516 "digest": "sha256", 00:19:18.516 "dhgroup": "null" 00:19:18.516 } 00:19:18.516 } 00:19:18.516 ]' 00:19:18.516 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.773 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.773 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.773 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:18.773 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.773 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.773 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.773 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.030 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.967 
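That completes the sha256/null pass over all four keys; the trace now repeats the identical procedure with --dhchap-dhgroups ffdhe2048. The enclosing loops were announced earlier (for digest / for dhgroup / for keyid at target/auth.sh@91-93), so the overall sweep has roughly this shape (the full digest and dhgroup lists are not visible in this excerpt and are assumed here):

    # keys[], hostrpc and connect_authenticate are target/auth.sh's own helpers, as traced above
    digests=(sha256 sha384 sha512)                                       # assumed full list
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)    # assumed full list
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # add_host, attach, verify, tear down
        done
      done
    done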
18:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.967 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.223 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.790 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.790 18:52:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.790 { 00:19:20.790 "cntlid": 9, 00:19:20.790 "qid": 0, 00:19:20.790 "state": "enabled", 00:19:20.790 "listen_address": { 00:19:20.790 "trtype": "TCP", 00:19:20.790 "adrfam": "IPv4", 00:19:20.790 "traddr": "10.0.0.2", 00:19:20.790 "trsvcid": "4420" 00:19:20.790 }, 00:19:20.790 "peer_address": { 00:19:20.790 "trtype": "TCP", 00:19:20.790 "adrfam": "IPv4", 00:19:20.790 "traddr": "10.0.0.1", 00:19:20.790 "trsvcid": "57526" 00:19:20.790 }, 00:19:20.790 "auth": { 00:19:20.790 "state": "completed", 00:19:20.790 "digest": "sha256", 00:19:20.790 "dhgroup": "ffdhe2048" 00:19:20.790 } 00:19:20.790 } 00:19:20.790 ]' 00:19:20.790 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.048 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.048 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.048 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.048 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.048 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.048 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.048 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.306 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.248 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.506 18:52:34 
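The verification step inside each round is the same every time: list the host-side controllers, then ask the target for the subsystem's queue pairs and confirm the negotiated auth parameters with jq, as in the .[0].auth.* probes above. Condensed, using the trace's rpc_cmd/hostrpc wrappers for rpc.py against the target and host sockets:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # digest under test in this round
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # dhgroup under test in this round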
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.506 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.764 00:19:22.764 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.764 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.764 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.021 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.021 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.021 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.021 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.021 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.021 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.021 { 00:19:23.021 "cntlid": 11, 00:19:23.021 "qid": 0, 00:19:23.021 "state": "enabled", 00:19:23.021 "listen_address": { 00:19:23.021 "trtype": "TCP", 00:19:23.021 "adrfam": "IPv4", 00:19:23.021 "traddr": "10.0.0.2", 00:19:23.021 "trsvcid": "4420" 00:19:23.021 }, 00:19:23.021 "peer_address": { 00:19:23.021 "trtype": "TCP", 00:19:23.021 "adrfam": "IPv4", 00:19:23.021 "traddr": "10.0.0.1", 00:19:23.021 "trsvcid": "57550" 00:19:23.021 }, 00:19:23.021 "auth": { 00:19:23.021 "state": "completed", 00:19:23.021 "digest": "sha256", 00:19:23.021 "dhgroup": "ffdhe2048" 00:19:23.021 } 00:19:23.021 } 00:19:23.021 ]' 00:19:23.021 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.280 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.280 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.280 18:52:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.280 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.280 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.280 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.280 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.537 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.472 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.729 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.296 00:19:25.296 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.296 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.296 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.555 { 00:19:25.555 "cntlid": 13, 00:19:25.555 "qid": 0, 00:19:25.555 "state": "enabled", 00:19:25.555 "listen_address": { 00:19:25.555 "trtype": "TCP", 00:19:25.555 "adrfam": "IPv4", 00:19:25.555 "traddr": "10.0.0.2", 00:19:25.555 "trsvcid": "4420" 00:19:25.555 }, 00:19:25.555 "peer_address": { 00:19:25.555 "trtype": "TCP", 00:19:25.555 "adrfam": "IPv4", 00:19:25.555 "traddr": "10.0.0.1", 00:19:25.555 "trsvcid": "57574" 00:19:25.555 }, 00:19:25.555 "auth": { 00:19:25.555 "state": "completed", 00:19:25.555 "digest": "sha256", 00:19:25.555 "dhgroup": "ffdhe2048" 00:19:25.555 } 00:19:25.555 } 00:19:25.555 ]' 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.555 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.813 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:19:26.801 18:52:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.801 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.801 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.801 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.801 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.801 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.801 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.801 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.058 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.624 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.624 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.624 { 00:19:27.624 "cntlid": 15, 00:19:27.624 "qid": 0, 00:19:27.624 "state": "enabled", 00:19:27.624 "listen_address": { 00:19:27.624 "trtype": "TCP", 00:19:27.624 "adrfam": "IPv4", 00:19:27.624 "traddr": "10.0.0.2", 00:19:27.624 "trsvcid": "4420" 00:19:27.624 }, 00:19:27.624 "peer_address": { 00:19:27.625 "trtype": "TCP", 00:19:27.625 "adrfam": "IPv4", 00:19:27.625 "traddr": "10.0.0.1", 00:19:27.625 "trsvcid": "57594" 00:19:27.625 }, 00:19:27.625 "auth": { 00:19:27.625 "state": "completed", 00:19:27.625 "digest": "sha256", 00:19:27.625 "dhgroup": "ffdhe2048" 00:19:27.625 } 00:19:27.625 } 00:19:27.625 ]' 00:19:27.625 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.882 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.882 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.882 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.882 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.882 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.882 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.882 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.141 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.077 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.335 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.593 00:19:29.593 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.593 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.593 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.851 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.851 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.851 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.851 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.851 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.851 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.851 { 00:19:29.851 "cntlid": 17, 00:19:29.851 "qid": 0, 00:19:29.851 "state": "enabled", 00:19:29.851 "listen_address": { 00:19:29.851 "trtype": "TCP", 00:19:29.851 "adrfam": "IPv4", 00:19:29.851 "traddr": "10.0.0.2", 00:19:29.851 "trsvcid": "4420" 00:19:29.851 }, 00:19:29.851 "peer_address": { 00:19:29.851 "trtype": "TCP", 00:19:29.851 "adrfam": "IPv4", 00:19:29.851 "traddr": "10.0.0.1", 00:19:29.851 "trsvcid": "42364" 00:19:29.851 }, 00:19:29.851 "auth": { 00:19:29.851 "state": "completed", 00:19:29.851 "digest": "sha256", 00:19:29.851 "dhgroup": "ffdhe3072" 00:19:29.851 } 00:19:29.851 } 00:19:29.851 ]' 00:19:29.851 18:52:41 
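[editor's note] Each qpairs payload captured by nvmf_subsystem_get_qpairs (like the one just above) is then validated field by field with jq. Condensed, and continuing the variables from the sketch earlier, the checks amount to the following; the expected dhgroup is whichever group the current outer iteration configured:

  # check the auth block of the first qpair reported for the subsystem
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]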
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.110 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.110 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.110 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:30.110 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.110 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.110 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.110 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.368 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.304 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.563 
18:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.563 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.820 00:19:31.820 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.820 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.820 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.076 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.076 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.076 18:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.076 18:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.076 18:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.076 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.076 { 00:19:32.076 "cntlid": 19, 00:19:32.076 "qid": 0, 00:19:32.076 "state": "enabled", 00:19:32.076 "listen_address": { 00:19:32.076 "trtype": "TCP", 00:19:32.076 "adrfam": "IPv4", 00:19:32.076 "traddr": "10.0.0.2", 00:19:32.076 "trsvcid": "4420" 00:19:32.076 }, 00:19:32.076 "peer_address": { 00:19:32.076 "trtype": "TCP", 00:19:32.076 "adrfam": "IPv4", 00:19:32.076 "traddr": "10.0.0.1", 00:19:32.076 "trsvcid": "42402" 00:19:32.076 }, 00:19:32.076 "auth": { 00:19:32.076 "state": "completed", 00:19:32.076 "digest": "sha256", 00:19:32.076 "dhgroup": "ffdhe3072" 00:19:32.076 } 00:19:32.076 } 00:19:32.076 ]' 00:19:32.076 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.333 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.333 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.333 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:32.333 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.333 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.333 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.333 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.590 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:19:33.531 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.531 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.531 18:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.531 18:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.531 18:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.531 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.531 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.532 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.789 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.047 00:19:34.306 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.306 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
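[editor's note] Between detaching the SPDK host controller and removing the host from the subsystem, every iteration above also connects with the Linux kernel initiator via nvme-cli. With this run's DHHC-1 strings elided (the full values are the ones printed in the trace), that step is:

  # kernel-initiator pass over the same key pair; secrets elided here
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret '<host DHHC-1 secret>' \
      --dhchap-ctrl-secret '<controller DHHC-1 secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # note: for the key3 iterations the trace passes only --dhchap-secret,
  # since key3 was added to the subsystem without a controller key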
00:19:34.307 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.565 { 00:19:34.565 "cntlid": 21, 00:19:34.565 "qid": 0, 00:19:34.565 "state": "enabled", 00:19:34.565 "listen_address": { 00:19:34.565 "trtype": "TCP", 00:19:34.565 "adrfam": "IPv4", 00:19:34.565 "traddr": "10.0.0.2", 00:19:34.565 "trsvcid": "4420" 00:19:34.565 }, 00:19:34.565 "peer_address": { 00:19:34.565 "trtype": "TCP", 00:19:34.565 "adrfam": "IPv4", 00:19:34.565 "traddr": "10.0.0.1", 00:19:34.565 "trsvcid": "42432" 00:19:34.565 }, 00:19:34.565 "auth": { 00:19:34.565 "state": "completed", 00:19:34.565 "digest": "sha256", 00:19:34.565 "dhgroup": "ffdhe3072" 00:19:34.565 } 00:19:34.565 } 00:19:34.565 ]' 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.565 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.822 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.761 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.019 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.588 00:19:36.588 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.588 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.588 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.846 { 00:19:36.846 "cntlid": 23, 00:19:36.846 "qid": 0, 00:19:36.846 "state": "enabled", 00:19:36.846 "listen_address": { 00:19:36.846 "trtype": "TCP", 00:19:36.846 "adrfam": "IPv4", 00:19:36.846 "traddr": "10.0.0.2", 00:19:36.846 "trsvcid": "4420" 00:19:36.846 }, 00:19:36.846 "peer_address": { 00:19:36.846 "trtype": "TCP", 00:19:36.846 "adrfam": "IPv4", 
00:19:36.846 "traddr": "10.0.0.1", 00:19:36.846 "trsvcid": "42458" 00:19:36.846 }, 00:19:36.846 "auth": { 00:19:36.846 "state": "completed", 00:19:36.846 "digest": "sha256", 00:19:36.846 "dhgroup": "ffdhe3072" 00:19:36.846 } 00:19:36.846 } 00:19:36.846 ]' 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.846 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.104 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.038 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:38.296 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.297 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.862 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.862 { 00:19:38.862 "cntlid": 25, 00:19:38.862 "qid": 0, 00:19:38.862 "state": "enabled", 00:19:38.862 "listen_address": { 00:19:38.862 "trtype": "TCP", 00:19:38.862 "adrfam": "IPv4", 00:19:38.862 "traddr": "10.0.0.2", 00:19:38.862 "trsvcid": "4420" 00:19:38.862 }, 00:19:38.862 "peer_address": { 00:19:38.862 "trtype": "TCP", 00:19:38.862 "adrfam": "IPv4", 00:19:38.862 "traddr": "10.0.0.1", 00:19:38.862 "trsvcid": "33398" 00:19:38.862 }, 00:19:38.862 "auth": { 00:19:38.862 "state": "completed", 00:19:38.862 "digest": "sha256", 00:19:38.862 "dhgroup": "ffdhe4096" 00:19:38.862 } 00:19:38.862 } 00:19:38.862 ]' 00:19:38.862 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.120 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.120 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.120 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.120 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.120 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.120 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.120 18:52:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.379 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.311 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.569 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.827 00:19:41.085 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.085 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.085 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.085 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.085 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.085 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.085 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.343 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.343 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.343 { 00:19:41.343 "cntlid": 27, 00:19:41.343 "qid": 0, 00:19:41.343 "state": "enabled", 00:19:41.343 "listen_address": { 00:19:41.343 "trtype": "TCP", 00:19:41.343 "adrfam": "IPv4", 00:19:41.343 "traddr": "10.0.0.2", 00:19:41.343 "trsvcid": "4420" 00:19:41.343 }, 00:19:41.343 "peer_address": { 00:19:41.343 "trtype": "TCP", 00:19:41.343 "adrfam": "IPv4", 00:19:41.343 "traddr": "10.0.0.1", 00:19:41.343 "trsvcid": "33428" 00:19:41.343 }, 00:19:41.343 "auth": { 00:19:41.343 "state": "completed", 00:19:41.343 "digest": "sha256", 00:19:41.343 "dhgroup": "ffdhe4096" 00:19:41.343 } 00:19:41.343 } 00:19:41.343 ]' 00:19:41.343 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.343 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.343 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.343 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:41.343 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.343 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.344 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.344 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.607 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
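[editor's note] Zooming out, this part of the trace is the sha256 leg of a nested sweep: an outer loop over DH groups (ffdhe2048, ffdhe3072 and ffdhe4096 so far, with ffdhe6144 beginning near the end of this excerpt) and an inner loop over the four configured key ids. A skeleton of that loop nest, reconstructed from the for statements echoed at target/auth.sh@92 and @93 and relying on the suite's own hostrpc and connect_authenticate helpers:

  # sketch only; lists just the dhgroups visible in this excerpt, and the
  # script itself iterates keyid as "${!keys[@]}" rather than a literal list
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
      for keyid in 0 1 2 3; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done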
00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.550 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.857 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.115 00:19:43.373 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.373 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.373 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.631 
18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.631 { 00:19:43.631 "cntlid": 29, 00:19:43.631 "qid": 0, 00:19:43.631 "state": "enabled", 00:19:43.631 "listen_address": { 00:19:43.631 "trtype": "TCP", 00:19:43.631 "adrfam": "IPv4", 00:19:43.631 "traddr": "10.0.0.2", 00:19:43.631 "trsvcid": "4420" 00:19:43.631 }, 00:19:43.631 "peer_address": { 00:19:43.631 "trtype": "TCP", 00:19:43.631 "adrfam": "IPv4", 00:19:43.631 "traddr": "10.0.0.1", 00:19:43.631 "trsvcid": "33444" 00:19:43.631 }, 00:19:43.631 "auth": { 00:19:43.631 "state": "completed", 00:19:43.631 "digest": "sha256", 00:19:43.631 "dhgroup": "ffdhe4096" 00:19:43.631 } 00:19:43.631 } 00:19:43.631 ]' 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.631 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.889 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.822 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.080 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.645 00:19:45.645 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.645 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.645 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.903 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.904 { 00:19:45.904 "cntlid": 31, 00:19:45.904 "qid": 0, 00:19:45.904 "state": "enabled", 00:19:45.904 "listen_address": { 00:19:45.904 "trtype": "TCP", 00:19:45.904 "adrfam": "IPv4", 00:19:45.904 "traddr": "10.0.0.2", 00:19:45.904 "trsvcid": "4420" 00:19:45.904 }, 00:19:45.904 "peer_address": { 00:19:45.904 "trtype": "TCP", 00:19:45.904 "adrfam": "IPv4", 00:19:45.904 "traddr": "10.0.0.1", 00:19:45.904 "trsvcid": "33472" 00:19:45.904 }, 00:19:45.904 "auth": { 00:19:45.904 "state": "completed", 00:19:45.904 "digest": "sha256", 00:19:45.904 "dhgroup": "ffdhe4096" 00:19:45.904 } 00:19:45.904 } 00:19:45.904 ]' 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.904 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.162 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.094 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:47.352 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.916 00:19:47.916 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.916 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.916 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.173 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.173 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.174 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.174 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.174 18:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.174 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.174 { 00:19:48.174 "cntlid": 33, 00:19:48.174 "qid": 0, 00:19:48.174 "state": "enabled", 00:19:48.174 "listen_address": { 00:19:48.174 "trtype": "TCP", 00:19:48.174 "adrfam": "IPv4", 00:19:48.174 "traddr": "10.0.0.2", 00:19:48.174 "trsvcid": "4420" 00:19:48.174 }, 00:19:48.174 "peer_address": { 00:19:48.174 "trtype": "TCP", 00:19:48.174 "adrfam": "IPv4", 00:19:48.174 "traddr": "10.0.0.1", 00:19:48.174 "trsvcid": "55630" 00:19:48.174 }, 00:19:48.174 "auth": { 00:19:48.174 "state": "completed", 00:19:48.174 "digest": "sha256", 00:19:48.174 "dhgroup": "ffdhe6144" 00:19:48.174 } 00:19:48.174 } 00:19:48.174 ]' 00:19:48.174 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.174 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.174 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.431 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.431 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.431 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.431 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.431 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.688 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:49.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.620 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.877 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.878 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.443 00:19:50.443 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.443 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.443 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
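
The cycle that repeats throughout this log is the same for every digest/dhgroup/key combination: the host-side bdev RPC server (reached through -s /var/tmp/host.sock) is restricted to a single DH-CHAP digest and DH group, the target allows the host NQN with the key under test, the SPDK initiator attaches, the negotiated parameters are read back, and everything is torn down again. A condensed sketch of one iteration, using only the RPCs, flags and addresses that appear in the surrounding log; the target-side calls are shown as plain rpc.py calls against the default socket, which is an assumption about how the test's rpc_cmd helper is wired up in this run, and key1/ckey1 are keyring names configured earlier in the test, outside this excerpt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side: allow exactly one digest and one DH group for DH-CHAP.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # Target side (default RPC socket assumed): register the host NQN with the key pair under test.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attach the SPDK initiator, then read back what was actually negotiated.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'   # expected: completed
    # Tear down before the next combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
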
00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.701 { 00:19:50.701 "cntlid": 35, 00:19:50.701 "qid": 0, 00:19:50.701 "state": "enabled", 00:19:50.701 "listen_address": { 00:19:50.701 "trtype": "TCP", 00:19:50.701 "adrfam": "IPv4", 00:19:50.701 "traddr": "10.0.0.2", 00:19:50.701 "trsvcid": "4420" 00:19:50.701 }, 00:19:50.701 "peer_address": { 00:19:50.701 "trtype": "TCP", 00:19:50.701 "adrfam": "IPv4", 00:19:50.701 "traddr": "10.0.0.1", 00:19:50.701 "trsvcid": "55672" 00:19:50.701 }, 00:19:50.701 "auth": { 00:19:50.701 "state": "completed", 00:19:50.701 "digest": "sha256", 00:19:50.701 "dhgroup": "ffdhe6144" 00:19:50.701 } 00:19:50.701 } 00:19:50.701 ]' 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.701 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.960 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:52.333 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
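
Between the detach of the SPDK initiator and the remove_host call, each iteration also exercises the same key material through the kernel host with nvme-cli, passing the raw DHHC-1 secret strings on the command line. The shape of that call, with $host_key and $ctrl_key standing in as placeholders for the full DHHC-1:xx:...: strings that appear verbatim in the log lines above:

    # Kernel-initiator leg of one iteration; flags exactly as logged.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
    # The "NQN:... disconnected 1 controller(s)" messages in the log are what confirm
    # that the authenticated connect actually produced a controller.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
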
00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.333 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.899 00:19:52.899 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.899 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.899 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.157 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.157 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.157 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.157 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.157 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.157 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.157 { 00:19:53.157 "cntlid": 37, 00:19:53.157 "qid": 0, 00:19:53.157 "state": "enabled", 00:19:53.157 "listen_address": { 00:19:53.157 "trtype": "TCP", 00:19:53.157 "adrfam": "IPv4", 00:19:53.157 "traddr": "10.0.0.2", 00:19:53.157 "trsvcid": "4420" 00:19:53.157 }, 00:19:53.157 "peer_address": { 00:19:53.157 "trtype": "TCP", 00:19:53.157 "adrfam": "IPv4", 00:19:53.157 "traddr": "10.0.0.1", 00:19:53.157 "trsvcid": "55698" 00:19:53.157 }, 00:19:53.157 "auth": { 00:19:53.157 "state": "completed", 00:19:53.157 "digest": "sha256", 00:19:53.157 "dhgroup": "ffdhe6144" 00:19:53.157 } 00:19:53.157 } 00:19:53.157 ]' 00:19:53.157 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:53.157 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.157 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.415 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.415 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.415 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.415 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.415 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.672 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.605 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.863 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.428 00:19:55.428 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.428 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.428 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.686 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.687 { 00:19:55.687 "cntlid": 39, 00:19:55.687 "qid": 0, 00:19:55.687 "state": "enabled", 00:19:55.687 "listen_address": { 00:19:55.687 "trtype": "TCP", 00:19:55.687 "adrfam": "IPv4", 00:19:55.687 "traddr": "10.0.0.2", 00:19:55.687 "trsvcid": "4420" 00:19:55.687 }, 00:19:55.687 "peer_address": { 00:19:55.687 "trtype": "TCP", 00:19:55.687 "adrfam": "IPv4", 00:19:55.687 "traddr": "10.0.0.1", 00:19:55.687 "trsvcid": "55732" 00:19:55.687 }, 00:19:55.687 "auth": { 00:19:55.687 "state": "completed", 00:19:55.687 "digest": "sha256", 00:19:55.687 "dhgroup": "ffdhe6144" 00:19:55.687 } 00:19:55.687 } 00:19:55.687 ]' 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.687 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.945 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.879 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.444 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.009 00:19:58.009 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.009 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.009 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.266 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.267 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.267 18:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.267 18:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.267 18:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.267 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.267 { 00:19:58.267 "cntlid": 41, 00:19:58.267 "qid": 0, 00:19:58.267 "state": "enabled", 00:19:58.267 "listen_address": { 00:19:58.267 "trtype": "TCP", 00:19:58.267 "adrfam": "IPv4", 00:19:58.267 "traddr": "10.0.0.2", 00:19:58.267 "trsvcid": "4420" 00:19:58.267 }, 00:19:58.267 "peer_address": { 00:19:58.267 "trtype": "TCP", 00:19:58.267 "adrfam": "IPv4", 00:19:58.267 "traddr": "10.0.0.1", 00:19:58.267 "trsvcid": "59690" 00:19:58.267 }, 00:19:58.267 "auth": { 00:19:58.267 "state": "completed", 00:19:58.267 "digest": "sha256", 00:19:58.267 "dhgroup": "ffdhe8192" 00:19:58.267 } 00:19:58.267 } 00:19:58.267 ]' 00:19:58.267 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.524 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.524 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.524 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.524 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.524 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.524 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.524 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.781 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.763 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.021 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.955 00:20:00.955 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.955 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.955 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.212 { 00:20:01.212 "cntlid": 43, 00:20:01.212 "qid": 0, 00:20:01.212 "state": "enabled", 00:20:01.212 "listen_address": { 00:20:01.212 "trtype": "TCP", 00:20:01.212 "adrfam": "IPv4", 00:20:01.212 "traddr": "10.0.0.2", 00:20:01.212 "trsvcid": "4420" 00:20:01.212 }, 00:20:01.212 "peer_address": { 
00:20:01.212 "trtype": "TCP", 00:20:01.212 "adrfam": "IPv4", 00:20:01.212 "traddr": "10.0.0.1", 00:20:01.212 "trsvcid": "59720" 00:20:01.212 }, 00:20:01.212 "auth": { 00:20:01.212 "state": "completed", 00:20:01.212 "digest": "sha256", 00:20:01.212 "dhgroup": "ffdhe8192" 00:20:01.212 } 00:20:01.212 } 00:20:01.212 ]' 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.212 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.212 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.212 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.212 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.212 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.212 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.470 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.405 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.663 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.604 00:20:03.604 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.604 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.604 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.862 { 00:20:03.862 "cntlid": 45, 00:20:03.862 "qid": 0, 00:20:03.862 "state": "enabled", 00:20:03.862 "listen_address": { 00:20:03.862 "trtype": "TCP", 00:20:03.862 "adrfam": "IPv4", 00:20:03.862 "traddr": "10.0.0.2", 00:20:03.862 "trsvcid": "4420" 00:20:03.862 }, 00:20:03.862 "peer_address": { 00:20:03.862 "trtype": "TCP", 00:20:03.862 "adrfam": "IPv4", 00:20:03.862 "traddr": "10.0.0.1", 00:20:03.862 "trsvcid": "59752" 00:20:03.862 }, 00:20:03.862 "auth": { 00:20:03.862 "state": "completed", 00:20:03.862 "digest": "sha256", 00:20:03.862 "dhgroup": "ffdhe8192" 00:20:03.862 } 00:20:03.862 } 00:20:03.862 ]' 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.862 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.120 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.120 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.120 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.120 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.120 18:53:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.379 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.315 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.571 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
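
Each of the qpairs dumps above is checked the same way: only the auth object of the first qpair matters, and it has to echo back exactly the digest and DH group that were forced on the host side, with the handshake in the completed state. A paraphrase of those checks, not the literal script, again assuming the target answers on rpc.py's default socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # The three jq probes seen throughout this log:
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]  # digest forced via bdev_nvme_set_options
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # DH group forced at this point in the log
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # DH-CHAP finished successfully
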
00:20:06.508 00:20:06.508 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.508 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.508 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.766 { 00:20:06.766 "cntlid": 47, 00:20:06.766 "qid": 0, 00:20:06.766 "state": "enabled", 00:20:06.766 "listen_address": { 00:20:06.766 "trtype": "TCP", 00:20:06.766 "adrfam": "IPv4", 00:20:06.766 "traddr": "10.0.0.2", 00:20:06.766 "trsvcid": "4420" 00:20:06.766 }, 00:20:06.766 "peer_address": { 00:20:06.766 "trtype": "TCP", 00:20:06.766 "adrfam": "IPv4", 00:20:06.766 "traddr": "10.0.0.1", 00:20:06.766 "trsvcid": "59774" 00:20:06.766 }, 00:20:06.766 "auth": { 00:20:06.766 "state": "completed", 00:20:06.766 "digest": "sha256", 00:20:06.766 "dhgroup": "ffdhe8192" 00:20:06.766 } 00:20:06.766 } 00:20:06.766 ]' 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.766 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.023 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.958 
18:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.958 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.216 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.474 00:20:08.474 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.474 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.474 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.734 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.734 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.734 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.734 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.734 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.734 18:53:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.734 { 00:20:08.734 "cntlid": 49, 00:20:08.734 "qid": 0, 00:20:08.734 "state": "enabled", 00:20:08.734 "listen_address": { 00:20:08.734 "trtype": "TCP", 00:20:08.734 "adrfam": "IPv4", 00:20:08.734 "traddr": "10.0.0.2", 00:20:08.734 "trsvcid": "4420" 00:20:08.734 }, 00:20:08.734 "peer_address": { 00:20:08.734 "trtype": "TCP", 00:20:08.734 "adrfam": "IPv4", 00:20:08.734 "traddr": "10.0.0.1", 00:20:08.734 "trsvcid": "36126" 00:20:08.734 }, 00:20:08.734 "auth": { 00:20:08.734 "state": "completed", 00:20:08.734 "digest": "sha384", 00:20:08.734 "dhgroup": "null" 00:20:08.734 } 00:20:08.734 } 00:20:08.734 ]' 00:20:08.734 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.992 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.992 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.992 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:08.992 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.992 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.992 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.992 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.250 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.187 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.445 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.703 00:20:10.703 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.703 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.703 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.961 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.961 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.961 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.961 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.962 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.962 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.962 { 00:20:10.962 "cntlid": 51, 00:20:10.962 "qid": 0, 00:20:10.962 "state": "enabled", 00:20:10.962 "listen_address": { 00:20:10.962 "trtype": "TCP", 00:20:10.962 "adrfam": "IPv4", 00:20:10.962 "traddr": "10.0.0.2", 00:20:10.962 "trsvcid": "4420" 00:20:10.962 }, 00:20:10.962 "peer_address": { 00:20:10.962 "trtype": "TCP", 00:20:10.962 "adrfam": "IPv4", 00:20:10.962 "traddr": "10.0.0.1", 00:20:10.962 "trsvcid": "36138" 00:20:10.962 }, 00:20:10.962 "auth": { 00:20:10.962 "state": "completed", 00:20:10.962 "digest": "sha384", 00:20:10.962 "dhgroup": "null" 00:20:10.962 } 00:20:10.962 } 00:20:10.962 ]' 00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
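The block above is one pass of the loop that target/auth.sh runs for every key index: the target is told which key pair the host NQN may authenticate with, the SPDK host stack attaches a controller through the RPC server on /var/tmp/host.sock, and the resulting qpair is checked for a completed DH-HMAC-CHAP handshake with the expected digest and dhgroup. A condensed sketch of that round trip, using only commands and paths that appear in this trace (the host NQN is abbreviated to <host-nqn>; rpc_cmd is the harness's target-side rpc.py helper, whose expansion is xtrace-disabled in this log):

  # target: allow <host-nqn> to authenticate against cnode0 with key1/ckey1
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host: attach a controller via the host-side RPC socket
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify: controller name, then the qpair's auth fields
  # (the script checks each field with its own jq call; this condenses them)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'                                 # sha384 / null / completed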
00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.219 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.476 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:20:12.408 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.409 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.409 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.409 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.409 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.409 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.409 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.409 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:12.667 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.925 00:20:12.925 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.925 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.925 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.183 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.183 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.183 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.183 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.183 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.183 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.183 { 00:20:13.183 "cntlid": 53, 00:20:13.183 "qid": 0, 00:20:13.183 "state": "enabled", 00:20:13.183 "listen_address": { 00:20:13.183 "trtype": "TCP", 00:20:13.183 "adrfam": "IPv4", 00:20:13.183 "traddr": "10.0.0.2", 00:20:13.183 "trsvcid": "4420" 00:20:13.183 }, 00:20:13.183 "peer_address": { 00:20:13.183 "trtype": "TCP", 00:20:13.183 "adrfam": "IPv4", 00:20:13.183 "traddr": "10.0.0.1", 00:20:13.183 "trsvcid": "36170" 00:20:13.183 }, 00:20:13.183 "auth": { 00:20:13.183 "state": "completed", 00:20:13.183 "digest": "sha384", 00:20:13.183 "dhgroup": "null" 00:20:13.183 } 00:20:13.183 } 00:20:13.183 ]' 00:20:13.183 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.183 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.183 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.440 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:13.441 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.441 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.441 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.441 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.700 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.634 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.634 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.892 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.150 00:20:15.150 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.150 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.150 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.408 { 00:20:15.408 "cntlid": 55, 00:20:15.408 "qid": 0, 00:20:15.408 "state": "enabled", 00:20:15.408 "listen_address": { 00:20:15.408 "trtype": "TCP", 00:20:15.408 "adrfam": "IPv4", 00:20:15.408 "traddr": "10.0.0.2", 00:20:15.408 "trsvcid": "4420" 00:20:15.408 }, 00:20:15.408 "peer_address": { 00:20:15.408 "trtype": "TCP", 00:20:15.408 "adrfam": "IPv4", 00:20:15.408 "traddr": "10.0.0.1", 00:20:15.408 "trsvcid": "36196" 00:20:15.408 }, 00:20:15.408 "auth": { 00:20:15.408 "state": "completed", 00:20:15.408 "digest": "sha384", 00:20:15.408 "dhgroup": "null" 00:20:15.408 } 00:20:15.408 } 00:20:15.408 ]' 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.408 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.667 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:15.667 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.667 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.667 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.667 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.979 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.916 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:17.174 
18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.174 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.433 00:20:17.433 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.433 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.433 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.691 { 00:20:17.691 "cntlid": 57, 00:20:17.691 "qid": 0, 00:20:17.691 "state": "enabled", 00:20:17.691 "listen_address": { 00:20:17.691 "trtype": "TCP", 00:20:17.691 "adrfam": "IPv4", 00:20:17.691 "traddr": "10.0.0.2", 00:20:17.691 "trsvcid": "4420" 00:20:17.691 }, 00:20:17.691 "peer_address": { 00:20:17.691 "trtype": "TCP", 00:20:17.691 "adrfam": "IPv4", 00:20:17.691 "traddr": "10.0.0.1", 00:20:17.691 "trsvcid": "36220" 00:20:17.691 }, 00:20:17.691 "auth": { 00:20:17.691 "state": "completed", 00:20:17.691 "digest": "sha384", 00:20:17.691 "dhgroup": "ffdhe2048" 00:20:17.691 } 00:20:17.691 } 00:20:17.691 ]' 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.691 18:53:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.691 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.986 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.986 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.986 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.986 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.922 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.502 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.502 18:53:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.503 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.762 00:20:19.762 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.762 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.762 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.020 { 00:20:20.020 "cntlid": 59, 00:20:20.020 "qid": 0, 00:20:20.020 "state": "enabled", 00:20:20.020 "listen_address": { 00:20:20.020 "trtype": "TCP", 00:20:20.020 "adrfam": "IPv4", 00:20:20.020 "traddr": "10.0.0.2", 00:20:20.020 "trsvcid": "4420" 00:20:20.020 }, 00:20:20.020 "peer_address": { 00:20:20.020 "trtype": "TCP", 00:20:20.020 "adrfam": "IPv4", 00:20:20.020 "traddr": "10.0.0.1", 00:20:20.020 "trsvcid": "49640" 00:20:20.020 }, 00:20:20.020 "auth": { 00:20:20.020 "state": "completed", 00:20:20.020 "digest": "sha384", 00:20:20.020 "dhgroup": "ffdhe2048" 00:20:20.020 } 00:20:20.020 } 00:20:20.020 ]' 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.020 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.277 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.650 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.907 00:20:22.165 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.165 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.165 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:22.165 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.165 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.165 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.165 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.425 { 00:20:22.425 "cntlid": 61, 00:20:22.425 "qid": 0, 00:20:22.425 "state": "enabled", 00:20:22.425 "listen_address": { 00:20:22.425 "trtype": "TCP", 00:20:22.425 "adrfam": "IPv4", 00:20:22.425 "traddr": "10.0.0.2", 00:20:22.425 "trsvcid": "4420" 00:20:22.425 }, 00:20:22.425 "peer_address": { 00:20:22.425 "trtype": "TCP", 00:20:22.425 "adrfam": "IPv4", 00:20:22.425 "traddr": "10.0.0.1", 00:20:22.425 "trsvcid": "49668" 00:20:22.425 }, 00:20:22.425 "auth": { 00:20:22.425 "state": "completed", 00:20:22.425 "digest": "sha384", 00:20:22.425 "dhgroup": "ffdhe2048" 00:20:22.425 } 00:20:22.425 } 00:20:22.425 ]' 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.425 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.682 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:23.616 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.876 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.442 00:20:24.442 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.442 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.442 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.700 { 00:20:24.700 "cntlid": 63, 00:20:24.700 "qid": 0, 00:20:24.700 "state": "enabled", 00:20:24.700 "listen_address": { 00:20:24.700 "trtype": "TCP", 00:20:24.700 "adrfam": "IPv4", 00:20:24.700 "traddr": "10.0.0.2", 00:20:24.700 "trsvcid": "4420" 00:20:24.700 }, 00:20:24.700 "peer_address": { 00:20:24.700 "trtype": "TCP", 00:20:24.700 "adrfam": "IPv4", 00:20:24.700 "traddr": "10.0.0.1", 00:20:24.700 "trsvcid": "49704" 00:20:24.700 }, 00:20:24.700 "auth": { 00:20:24.700 "state": "completed", 00:20:24.700 "digest": 
"sha384", 00:20:24.700 "dhgroup": "ffdhe2048" 00:20:24.700 } 00:20:24.700 } 00:20:24.700 ]' 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.700 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.959 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.892 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.149 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.715 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.715 { 00:20:26.715 "cntlid": 65, 00:20:26.715 "qid": 0, 00:20:26.715 "state": "enabled", 00:20:26.715 "listen_address": { 00:20:26.715 "trtype": "TCP", 00:20:26.715 "adrfam": "IPv4", 00:20:26.715 "traddr": "10.0.0.2", 00:20:26.715 "trsvcid": "4420" 00:20:26.715 }, 00:20:26.715 "peer_address": { 00:20:26.715 "trtype": "TCP", 00:20:26.715 "adrfam": "IPv4", 00:20:26.715 "traddr": "10.0.0.1", 00:20:26.715 "trsvcid": "49728" 00:20:26.715 }, 00:20:26.715 "auth": { 00:20:26.715 "state": "completed", 00:20:26.715 "digest": "sha384", 00:20:26.715 "dhgroup": "ffdhe3072" 00:20:26.715 } 00:20:26.715 } 00:20:26.715 ]' 00:20:26.715 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.973 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.973 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.973 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.973 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.973 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.973 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.973 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.231 
18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.166 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.424 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.682 00:20:28.682 18:53:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.682 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.682 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.940 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.940 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.940 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.940 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.940 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.941 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.941 { 00:20:28.941 "cntlid": 67, 00:20:28.941 "qid": 0, 00:20:28.941 "state": "enabled", 00:20:28.941 "listen_address": { 00:20:28.941 "trtype": "TCP", 00:20:28.941 "adrfam": "IPv4", 00:20:28.941 "traddr": "10.0.0.2", 00:20:28.941 "trsvcid": "4420" 00:20:28.941 }, 00:20:28.941 "peer_address": { 00:20:28.941 "trtype": "TCP", 00:20:28.941 "adrfam": "IPv4", 00:20:28.941 "traddr": "10.0.0.1", 00:20:28.941 "trsvcid": "50388" 00:20:28.941 }, 00:20:28.941 "auth": { 00:20:28.941 "state": "completed", 00:20:28.941 "digest": "sha384", 00:20:28.941 "dhgroup": "ffdhe3072" 00:20:28.941 } 00:20:28.941 } 00:20:28.941 ]' 00:20:28.941 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.941 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.941 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.199 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.199 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.199 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.199 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.199 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.458 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:20:30.396 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.396 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.396 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.396 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.396 
18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.396 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.396 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.396 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.654 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.912 00:20:30.912 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.912 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.912 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.170 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.170 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.170 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.170 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.170 18:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.170 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.170 { 00:20:31.170 "cntlid": 69, 00:20:31.170 "qid": 0, 00:20:31.170 "state": "enabled", 00:20:31.170 "listen_address": { 
00:20:31.170 "trtype": "TCP", 00:20:31.170 "adrfam": "IPv4", 00:20:31.170 "traddr": "10.0.0.2", 00:20:31.170 "trsvcid": "4420" 00:20:31.170 }, 00:20:31.170 "peer_address": { 00:20:31.170 "trtype": "TCP", 00:20:31.170 "adrfam": "IPv4", 00:20:31.170 "traddr": "10.0.0.1", 00:20:31.170 "trsvcid": "50410" 00:20:31.170 }, 00:20:31.170 "auth": { 00:20:31.170 "state": "completed", 00:20:31.170 "digest": "sha384", 00:20:31.170 "dhgroup": "ffdhe3072" 00:20:31.170 } 00:20:31.170 } 00:20:31.170 ]' 00:20:31.171 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.171 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.171 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.171 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.171 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.429 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.429 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.429 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.698 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:20:32.653 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.654 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.654 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.654 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.654 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.654 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.654 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.654 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.911 
18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.911 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.168 00:20:33.168 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.168 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.168 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.427 { 00:20:33.427 "cntlid": 71, 00:20:33.427 "qid": 0, 00:20:33.427 "state": "enabled", 00:20:33.427 "listen_address": { 00:20:33.427 "trtype": "TCP", 00:20:33.427 "adrfam": "IPv4", 00:20:33.427 "traddr": "10.0.0.2", 00:20:33.427 "trsvcid": "4420" 00:20:33.427 }, 00:20:33.427 "peer_address": { 00:20:33.427 "trtype": "TCP", 00:20:33.427 "adrfam": "IPv4", 00:20:33.427 "traddr": "10.0.0.1", 00:20:33.427 "trsvcid": "50436" 00:20:33.427 }, 00:20:33.427 "auth": { 00:20:33.427 "state": "completed", 00:20:33.427 "digest": "sha384", 00:20:33.427 "dhgroup": "ffdhe3072" 00:20:33.427 } 00:20:33.427 } 00:20:33.427 ]' 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.427 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.685 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.685 18:53:45 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.685 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.943 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:34.876 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.134 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.392 00:20:35.392 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.392 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.392 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.650 { 00:20:35.650 "cntlid": 73, 00:20:35.650 "qid": 0, 00:20:35.650 "state": "enabled", 00:20:35.650 "listen_address": { 00:20:35.650 "trtype": "TCP", 00:20:35.650 "adrfam": "IPv4", 00:20:35.650 "traddr": "10.0.0.2", 00:20:35.650 "trsvcid": "4420" 00:20:35.650 }, 00:20:35.650 "peer_address": { 00:20:35.650 "trtype": "TCP", 00:20:35.650 "adrfam": "IPv4", 00:20:35.650 "traddr": "10.0.0.1", 00:20:35.650 "trsvcid": "50458" 00:20:35.650 }, 00:20:35.650 "auth": { 00:20:35.650 "state": "completed", 00:20:35.650 "digest": "sha384", 00:20:35.650 "dhgroup": "ffdhe4096" 00:20:35.650 } 00:20:35.650 } 00:20:35.650 ]' 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.650 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.908 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.908 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.908 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.165 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.097 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.354 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.612 00:20:37.612 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.612 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.612 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.872 { 00:20:37.872 "cntlid": 75, 00:20:37.872 "qid": 0, 00:20:37.872 "state": "enabled", 00:20:37.872 "listen_address": { 00:20:37.872 "trtype": "TCP", 00:20:37.872 "adrfam": "IPv4", 00:20:37.872 "traddr": "10.0.0.2", 00:20:37.872 "trsvcid": "4420" 00:20:37.872 }, 00:20:37.872 "peer_address": { 00:20:37.872 "trtype": "TCP", 00:20:37.872 "adrfam": "IPv4", 00:20:37.872 "traddr": "10.0.0.1", 00:20:37.872 "trsvcid": "60188" 00:20:37.872 }, 00:20:37.872 "auth": { 00:20:37.872 "state": "completed", 00:20:37.872 "digest": "sha384", 00:20:37.872 "dhgroup": "ffdhe4096" 00:20:37.872 } 00:20:37.872 } 00:20:37.872 ]' 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.872 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.131 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.131 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.131 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.131 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.131 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.388 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.320 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.577 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.834 00:20:39.834 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.834 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.834 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.400 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.400 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.400 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.400 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.400 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.400 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.400 { 00:20:40.400 "cntlid": 77, 00:20:40.400 "qid": 0, 00:20:40.400 "state": "enabled", 00:20:40.400 "listen_address": { 00:20:40.400 "trtype": "TCP", 00:20:40.400 "adrfam": "IPv4", 00:20:40.400 "traddr": "10.0.0.2", 00:20:40.400 "trsvcid": "4420" 00:20:40.400 }, 00:20:40.400 "peer_address": { 00:20:40.400 "trtype": "TCP", 00:20:40.400 "adrfam": "IPv4", 00:20:40.400 "traddr": "10.0.0.1", 00:20:40.400 "trsvcid": "60218" 00:20:40.400 }, 00:20:40.400 "auth": { 00:20:40.400 "state": "completed", 00:20:40.400 "digest": "sha384", 00:20:40.400 "dhgroup": "ffdhe4096" 00:20:40.400 } 00:20:40.400 } 00:20:40.400 ]' 00:20:40.400 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.400 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.400 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:40.400 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.400 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.400 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.400 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.400 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.658 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:20:41.590 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.590 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.590 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.590 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.590 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.590 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.591 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.591 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.848 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.414 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.414 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.414 { 00:20:42.414 "cntlid": 79, 00:20:42.414 "qid": 0, 00:20:42.414 "state": "enabled", 00:20:42.414 "listen_address": { 00:20:42.414 "trtype": "TCP", 00:20:42.414 "adrfam": "IPv4", 00:20:42.414 "traddr": "10.0.0.2", 00:20:42.414 "trsvcid": "4420" 00:20:42.414 }, 00:20:42.414 "peer_address": { 00:20:42.414 "trtype": "TCP", 00:20:42.414 "adrfam": "IPv4", 00:20:42.414 "traddr": "10.0.0.1", 00:20:42.414 "trsvcid": "60244" 00:20:42.414 }, 00:20:42.414 "auth": { 00:20:42.414 "state": "completed", 00:20:42.414 "digest": "sha384", 00:20:42.414 "dhgroup": "ffdhe4096" 00:20:42.414 } 00:20:42.414 } 00:20:42.414 ]' 00:20:42.671 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.671 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.671 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.671 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.671 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.672 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.672 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.672 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.928 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.861 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:43.861 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.119 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:44.119 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.120 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.687 00:20:44.687 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.687 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.687 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.945 { 00:20:44.945 "cntlid": 81, 00:20:44.945 "qid": 0, 00:20:44.945 "state": "enabled", 00:20:44.945 "listen_address": { 00:20:44.945 "trtype": "TCP", 00:20:44.945 "adrfam": "IPv4", 00:20:44.945 "traddr": "10.0.0.2", 00:20:44.945 "trsvcid": "4420" 00:20:44.945 }, 00:20:44.945 "peer_address": { 00:20:44.945 "trtype": "TCP", 00:20:44.945 "adrfam": "IPv4", 00:20:44.945 "traddr": "10.0.0.1", 00:20:44.945 "trsvcid": "60260" 00:20:44.945 }, 00:20:44.945 "auth": { 00:20:44.945 "state": "completed", 00:20:44.945 "digest": "sha384", 00:20:44.945 "dhgroup": "ffdhe6144" 00:20:44.945 } 00:20:44.945 } 00:20:44.945 ]' 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.945 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.203 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:20:46.135 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.135 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.135 18:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.135 18:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.136 18:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.136 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.136 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.136 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.395 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.959 00:20:46.959 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.959 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.959 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.217 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.217 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.217 18:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.217 18:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.217 18:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.217 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.217 { 00:20:47.217 "cntlid": 83, 00:20:47.217 "qid": 0, 00:20:47.217 "state": "enabled", 00:20:47.217 "listen_address": { 00:20:47.217 "trtype": "TCP", 00:20:47.217 "adrfam": "IPv4", 00:20:47.217 "traddr": "10.0.0.2", 00:20:47.217 "trsvcid": "4420" 00:20:47.217 }, 00:20:47.217 "peer_address": { 00:20:47.217 "trtype": "TCP", 00:20:47.217 "adrfam": "IPv4", 00:20:47.217 "traddr": "10.0.0.1", 00:20:47.217 "trsvcid": "60294" 00:20:47.217 }, 00:20:47.217 "auth": { 00:20:47.217 "state": "completed", 00:20:47.217 "digest": "sha384", 00:20:47.217 
"dhgroup": "ffdhe6144" 00:20:47.217 } 00:20:47.217 } 00:20:47.217 ]' 00:20:47.217 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.476 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.476 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.476 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.476 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.476 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.476 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.476 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.734 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.679 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.982 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.548 00:20:49.548 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.548 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.548 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.806 { 00:20:49.806 "cntlid": 85, 00:20:49.806 "qid": 0, 00:20:49.806 "state": "enabled", 00:20:49.806 "listen_address": { 00:20:49.806 "trtype": "TCP", 00:20:49.806 "adrfam": "IPv4", 00:20:49.806 "traddr": "10.0.0.2", 00:20:49.806 "trsvcid": "4420" 00:20:49.806 }, 00:20:49.806 "peer_address": { 00:20:49.806 "trtype": "TCP", 00:20:49.806 "adrfam": "IPv4", 00:20:49.806 "traddr": "10.0.0.1", 00:20:49.806 "trsvcid": "58882" 00:20:49.806 }, 00:20:49.806 "auth": { 00:20:49.806 "state": "completed", 00:20:49.806 "digest": "sha384", 00:20:49.806 "dhgroup": "ffdhe6144" 00:20:49.806 } 00:20:49.806 } 00:20:49.806 ]' 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.806 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.065 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:51.004 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.262 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.828 00:20:51.828 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.828 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.828 18:54:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.086 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.086 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.086 18:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.086 18:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.086 18:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.086 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.086 { 00:20:52.086 "cntlid": 87, 00:20:52.086 "qid": 0, 00:20:52.086 "state": "enabled", 00:20:52.086 "listen_address": { 00:20:52.086 "trtype": "TCP", 00:20:52.086 "adrfam": "IPv4", 00:20:52.086 "traddr": "10.0.0.2", 00:20:52.087 "trsvcid": "4420" 00:20:52.087 }, 00:20:52.087 "peer_address": { 00:20:52.087 "trtype": "TCP", 00:20:52.087 "adrfam": "IPv4", 00:20:52.087 "traddr": "10.0.0.1", 00:20:52.087 "trsvcid": "58908" 00:20:52.087 }, 00:20:52.087 "auth": { 00:20:52.087 "state": "completed", 00:20:52.087 "digest": "sha384", 00:20:52.087 "dhgroup": "ffdhe6144" 00:20:52.087 } 00:20:52.087 } 00:20:52.087 ]' 00:20:52.087 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.344 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.344 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.344 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.344 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.344 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.344 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.344 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.612 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:20:53.548 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.548 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.548 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.548 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.548 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.548 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.548 18:54:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.548 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.549 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.806 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.807 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.745 00:20:54.745 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.745 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.745 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.003 { 00:20:55.003 "cntlid": 89, 00:20:55.003 "qid": 0, 00:20:55.003 "state": "enabled", 00:20:55.003 "listen_address": { 00:20:55.003 "trtype": "TCP", 00:20:55.003 "adrfam": "IPv4", 00:20:55.003 "traddr": "10.0.0.2", 00:20:55.003 
"trsvcid": "4420" 00:20:55.003 }, 00:20:55.003 "peer_address": { 00:20:55.003 "trtype": "TCP", 00:20:55.003 "adrfam": "IPv4", 00:20:55.003 "traddr": "10.0.0.1", 00:20:55.003 "trsvcid": "58936" 00:20:55.003 }, 00:20:55.003 "auth": { 00:20:55.003 "state": "completed", 00:20:55.003 "digest": "sha384", 00:20:55.003 "dhgroup": "ffdhe8192" 00:20:55.003 } 00:20:55.003 } 00:20:55.003 ]' 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.003 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.004 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.004 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.004 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.004 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.004 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.262 18:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:20:56.256 18:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.256 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.256 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.257 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.257 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.257 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.257 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.257 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.515 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.449 00:20:57.449 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.449 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.449 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.708 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.708 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.708 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.708 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.708 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.708 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.708 { 00:20:57.708 "cntlid": 91, 00:20:57.708 "qid": 0, 00:20:57.708 "state": "enabled", 00:20:57.708 "listen_address": { 00:20:57.708 "trtype": "TCP", 00:20:57.708 "adrfam": "IPv4", 00:20:57.708 "traddr": "10.0.0.2", 00:20:57.708 "trsvcid": "4420" 00:20:57.709 }, 00:20:57.709 "peer_address": { 00:20:57.709 "trtype": "TCP", 00:20:57.709 "adrfam": "IPv4", 00:20:57.709 "traddr": "10.0.0.1", 00:20:57.709 "trsvcid": "58956" 00:20:57.709 }, 00:20:57.709 "auth": { 00:20:57.709 "state": "completed", 00:20:57.709 "digest": "sha384", 00:20:57.709 "dhgroup": "ffdhe8192" 00:20:57.709 } 00:20:57.709 } 00:20:57.709 ]' 00:20:57.709 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.709 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.709 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.709 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.709 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.709 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.709 18:54:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.709 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.967 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.900 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.159 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:59.159 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.159 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.159 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.159 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.160 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.160 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.160 18:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.160 18:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.419 18:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.419 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.419 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.356 00:21:00.356 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.356 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.356 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.356 { 00:21:00.356 "cntlid": 93, 00:21:00.356 "qid": 0, 00:21:00.356 "state": "enabled", 00:21:00.356 "listen_address": { 00:21:00.356 "trtype": "TCP", 00:21:00.356 "adrfam": "IPv4", 00:21:00.356 "traddr": "10.0.0.2", 00:21:00.356 "trsvcid": "4420" 00:21:00.356 }, 00:21:00.356 "peer_address": { 00:21:00.356 "trtype": "TCP", 00:21:00.356 "adrfam": "IPv4", 00:21:00.356 "traddr": "10.0.0.1", 00:21:00.356 "trsvcid": "55802" 00:21:00.356 }, 00:21:00.356 "auth": { 00:21:00.356 "state": "completed", 00:21:00.356 "digest": "sha384", 00:21:00.356 "dhgroup": "ffdhe8192" 00:21:00.356 } 00:21:00.356 } 00:21:00.356 ]' 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.356 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.614 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.614 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.614 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.614 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.614 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.871 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.809 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.067 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.003 00:21:03.003 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.003 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.003 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.261 18:54:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.261 { 00:21:03.261 "cntlid": 95, 00:21:03.261 "qid": 0, 00:21:03.261 "state": "enabled", 00:21:03.261 "listen_address": { 00:21:03.261 "trtype": "TCP", 00:21:03.261 "adrfam": "IPv4", 00:21:03.261 "traddr": "10.0.0.2", 00:21:03.261 "trsvcid": "4420" 00:21:03.261 }, 00:21:03.261 "peer_address": { 00:21:03.261 "trtype": "TCP", 00:21:03.261 "adrfam": "IPv4", 00:21:03.261 "traddr": "10.0.0.1", 00:21:03.261 "trsvcid": "55820" 00:21:03.261 }, 00:21:03.261 "auth": { 00:21:03.261 "state": "completed", 00:21:03.261 "digest": "sha384", 00:21:03.261 "dhgroup": "ffdhe8192" 00:21:03.261 } 00:21:03.261 } 00:21:03.261 ]' 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.261 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.519 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.519 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.520 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.778 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.714 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.973 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.233 00:21:05.233 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.233 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.233 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.521 { 00:21:05.521 "cntlid": 97, 00:21:05.521 "qid": 0, 00:21:05.521 "state": "enabled", 00:21:05.521 "listen_address": { 00:21:05.521 "trtype": "TCP", 00:21:05.521 "adrfam": "IPv4", 00:21:05.521 "traddr": "10.0.0.2", 00:21:05.521 "trsvcid": "4420" 00:21:05.521 }, 00:21:05.521 "peer_address": { 00:21:05.521 "trtype": "TCP", 00:21:05.521 "adrfam": "IPv4", 00:21:05.521 "traddr": "10.0.0.1", 00:21:05.521 "trsvcid": "55852" 00:21:05.521 }, 00:21:05.521 "auth": { 00:21:05.521 "state": "completed", 00:21:05.521 "digest": "sha512", 00:21:05.521 "dhgroup": "null" 00:21:05.521 } 00:21:05.521 } 00:21:05.521 ]' 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.521 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.781 18:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.718 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.975 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.543 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.543 { 00:21:07.543 "cntlid": 99, 00:21:07.543 "qid": 0, 00:21:07.543 "state": "enabled", 00:21:07.543 "listen_address": { 00:21:07.543 "trtype": "TCP", 00:21:07.543 "adrfam": "IPv4", 00:21:07.543 "traddr": "10.0.0.2", 00:21:07.543 "trsvcid": "4420" 00:21:07.543 }, 00:21:07.543 "peer_address": { 00:21:07.543 "trtype": "TCP", 00:21:07.543 "adrfam": "IPv4", 00:21:07.543 "traddr": "10.0.0.1", 00:21:07.543 "trsvcid": "55868" 00:21:07.543 }, 00:21:07.543 "auth": { 00:21:07.543 "state": "completed", 00:21:07.543 "digest": "sha512", 00:21:07.543 "dhgroup": "null" 00:21:07.543 } 00:21:07.543 } 00:21:07.543 ]' 00:21:07.543 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.800 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.800 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.800 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.800 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.800 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.801 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.801 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.058 18:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 
00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.993 18:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.252 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.510 00:21:09.510 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.510 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.510 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.768 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.768 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.768 18:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.768 18:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.768 18:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.768 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.768 { 00:21:09.768 "cntlid": 101, 00:21:09.768 "qid": 0, 00:21:09.768 "state": "enabled", 00:21:09.768 "listen_address": { 00:21:09.768 "trtype": "TCP", 00:21:09.768 "adrfam": "IPv4", 00:21:09.768 "traddr": "10.0.0.2", 00:21:09.768 "trsvcid": "4420" 00:21:09.768 }, 00:21:09.768 "peer_address": { 00:21:09.768 "trtype": "TCP", 00:21:09.768 "adrfam": "IPv4", 00:21:09.768 "traddr": "10.0.0.1", 00:21:09.768 "trsvcid": "57124" 00:21:09.768 }, 00:21:09.768 "auth": { 00:21:09.768 "state": "completed", 00:21:09.768 "digest": "sha512", 00:21:09.768 "dhgroup": "null" 00:21:09.768 } 00:21:09.768 } 00:21:09.768 ]' 00:21:09.768 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.025 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.025 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.025 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:10.025 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.025 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.026 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.026 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.284 18:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.220 18:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.479 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.736 00:21:11.736 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.736 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.736 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.994 { 00:21:11.994 "cntlid": 103, 00:21:11.994 "qid": 0, 00:21:11.994 "state": "enabled", 00:21:11.994 "listen_address": { 00:21:11.994 "trtype": "TCP", 00:21:11.994 "adrfam": "IPv4", 00:21:11.994 "traddr": "10.0.0.2", 00:21:11.994 "trsvcid": "4420" 00:21:11.994 }, 00:21:11.994 "peer_address": { 00:21:11.994 "trtype": "TCP", 00:21:11.994 "adrfam": "IPv4", 00:21:11.994 "traddr": "10.0.0.1", 00:21:11.994 "trsvcid": "57148" 00:21:11.994 }, 00:21:11.994 "auth": { 00:21:11.994 "state": "completed", 00:21:11.994 "digest": "sha512", 00:21:11.994 "dhgroup": "null" 00:21:11.994 } 00:21:11.994 } 00:21:11.994 ]' 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.994 18:54:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.994 18:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.254 18:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:21:13.190 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.448 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.705 18:54:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.705 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.965 00:21:13.965 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.965 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.965 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.223 { 00:21:14.223 "cntlid": 105, 00:21:14.223 "qid": 0, 00:21:14.223 "state": "enabled", 00:21:14.223 "listen_address": { 00:21:14.223 "trtype": "TCP", 00:21:14.223 "adrfam": "IPv4", 00:21:14.223 "traddr": "10.0.0.2", 00:21:14.223 "trsvcid": "4420" 00:21:14.223 }, 00:21:14.223 "peer_address": { 00:21:14.223 "trtype": "TCP", 00:21:14.223 "adrfam": "IPv4", 00:21:14.223 "traddr": "10.0.0.1", 00:21:14.223 "trsvcid": "57190" 00:21:14.223 }, 00:21:14.223 "auth": { 00:21:14.223 "state": "completed", 00:21:14.223 "digest": "sha512", 00:21:14.223 "dhgroup": "ffdhe2048" 00:21:14.223 } 00:21:14.223 } 00:21:14.223 ]' 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.223 18:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.223 18:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.223 18:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.223 18:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.223 18:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.223 18:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.482 18:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.418 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.676 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.244 00:21:16.244 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.244 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.244 18:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.244 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.244 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.244 18:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.244 18:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.244 18:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.244 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.244 { 00:21:16.244 "cntlid": 107, 00:21:16.244 "qid": 0, 00:21:16.244 "state": "enabled", 00:21:16.244 "listen_address": { 00:21:16.244 "trtype": "TCP", 00:21:16.244 "adrfam": "IPv4", 00:21:16.244 "traddr": "10.0.0.2", 00:21:16.244 "trsvcid": "4420" 00:21:16.244 }, 00:21:16.244 "peer_address": { 00:21:16.244 "trtype": "TCP", 00:21:16.244 "adrfam": "IPv4", 00:21:16.244 "traddr": "10.0.0.1", 00:21:16.244 "trsvcid": "57230" 00:21:16.244 }, 00:21:16.244 "auth": { 00:21:16.244 "state": "completed", 00:21:16.244 "digest": "sha512", 00:21:16.244 "dhgroup": "ffdhe2048" 00:21:16.244 } 00:21:16.244 } 00:21:16.244 ]' 00:21:16.244 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.501 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.501 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.501 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.501 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.501 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.501 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.501 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.759 18:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:21:17.700 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.700 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.700 18:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.700 18:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.700 18:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.700 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.700 18:54:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.700 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.958 18:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.215 00:21:18.216 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.216 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.216 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.473 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.473 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.473 18:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.473 18:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.473 18:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.473 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.473 { 00:21:18.473 "cntlid": 109, 00:21:18.473 "qid": 0, 00:21:18.473 "state": "enabled", 00:21:18.473 "listen_address": { 00:21:18.473 "trtype": "TCP", 00:21:18.473 "adrfam": "IPv4", 00:21:18.473 "traddr": "10.0.0.2", 00:21:18.473 "trsvcid": "4420" 00:21:18.473 }, 00:21:18.473 "peer_address": { 00:21:18.473 "trtype": "TCP", 00:21:18.473 
"adrfam": "IPv4", 00:21:18.474 "traddr": "10.0.0.1", 00:21:18.474 "trsvcid": "56224" 00:21:18.474 }, 00:21:18.474 "auth": { 00:21:18.474 "state": "completed", 00:21:18.474 "digest": "sha512", 00:21:18.474 "dhgroup": "ffdhe2048" 00:21:18.474 } 00:21:18.474 } 00:21:18.474 ]' 00:21:18.474 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.731 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.731 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.731 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.731 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.731 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.731 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.731 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.991 18:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.928 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.185 18:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.443 00:21:20.443 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.443 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.443 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.699 { 00:21:20.699 "cntlid": 111, 00:21:20.699 "qid": 0, 00:21:20.699 "state": "enabled", 00:21:20.699 "listen_address": { 00:21:20.699 "trtype": "TCP", 00:21:20.699 "adrfam": "IPv4", 00:21:20.699 "traddr": "10.0.0.2", 00:21:20.699 "trsvcid": "4420" 00:21:20.699 }, 00:21:20.699 "peer_address": { 00:21:20.699 "trtype": "TCP", 00:21:20.699 "adrfam": "IPv4", 00:21:20.699 "traddr": "10.0.0.1", 00:21:20.699 "trsvcid": "56256" 00:21:20.699 }, 00:21:20.699 "auth": { 00:21:20.699 "state": "completed", 00:21:20.699 "digest": "sha512", 00:21:20.699 "dhgroup": "ffdhe2048" 00:21:20.699 } 00:21:20.699 } 00:21:20.699 ]' 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.699 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.957 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.957 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.957 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.227 18:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.203 18:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.203 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
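For orientation, the target/auth.sh iteration traced above boils down to three RPCs. The following is a minimal sketch, not the test script itself: it assumes scripts/rpc.py is on PATH (the log invokes it by its full workspace path), that the target listens on its default RPC socket while the host-side bdev layer uses /var/tmp/host.sock, and that DH-HMAC-CHAP keys named key0/ckey0 are already registered. The shell variable names are introduced here only for readability; the NQNs, address, digest, and DH group are the ones from the ffdhe3072 pass above.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate pass (sha512 / ffdhe3072 / key0).
# Assumptions: rpc.py on PATH, target on its default RPC socket, host RPC on
# /var/tmp/host.sock, keyring entries key0/ckey0 already loaded.
set -euo pipefail

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOST_SOCK=/var/tmp/host.sock

# Host side: restrict negotiation to one digest and one DH group.
rpc.py -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Target side: allow the host on the subsystem, binding its key pair.
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same keys.
rpc.py -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0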
00:21:22.771 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.771 { 00:21:22.771 "cntlid": 113, 00:21:22.771 "qid": 0, 00:21:22.771 "state": "enabled", 00:21:22.771 "listen_address": { 00:21:22.771 "trtype": "TCP", 00:21:22.771 "adrfam": "IPv4", 00:21:22.771 "traddr": "10.0.0.2", 00:21:22.771 "trsvcid": "4420" 00:21:22.771 }, 00:21:22.771 "peer_address": { 00:21:22.771 "trtype": "TCP", 00:21:22.771 "adrfam": "IPv4", 00:21:22.771 "traddr": "10.0.0.1", 00:21:22.771 "trsvcid": "56278" 00:21:22.771 }, 00:21:22.771 "auth": { 00:21:22.771 "state": "completed", 00:21:22.771 "digest": "sha512", 00:21:22.771 "dhgroup": "ffdhe3072" 00:21:22.771 } 00:21:22.771 } 00:21:22.771 ]' 00:21:22.771 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.028 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.029 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.029 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.029 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.029 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.029 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.029 18:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.286 18:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
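The assertions interleaved with the qpair JSON above (target/auth.sh lines 44-48) check the negotiated parameters, not merely that a connection exists. A minimal sketch of those checks follows, under the same assumptions as the previous sketch plus jq being installed; the .auth.digest, .auth.dhgroup, and .auth.state fields are exactly the ones probed in the log.

#!/usr/bin/env bash
# Sketch of the verification and teardown steps shown above.
# Assumptions: rpc.py and jq on PATH, sockets and NQN as in the previous sketch.
set -euo pipefail

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOST_SOCK=/var/tmp/host.sock

# The attached controller must be listed under the expected name.
[[ $(rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The subsystem's qpair listing reports what was actually negotiated.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

# Tear down before the next digest/dhgroup/key combination.
rpc.py -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0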
00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.220 18:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.480 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.738 00:21:24.738 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.738 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.738 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.996 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.996 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.996 18:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.996 18:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.996 18:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.996 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.996 { 00:21:24.996 
"cntlid": 115, 00:21:24.996 "qid": 0, 00:21:24.996 "state": "enabled", 00:21:24.996 "listen_address": { 00:21:24.996 "trtype": "TCP", 00:21:24.996 "adrfam": "IPv4", 00:21:24.996 "traddr": "10.0.0.2", 00:21:24.996 "trsvcid": "4420" 00:21:24.996 }, 00:21:24.996 "peer_address": { 00:21:24.996 "trtype": "TCP", 00:21:24.996 "adrfam": "IPv4", 00:21:24.996 "traddr": "10.0.0.1", 00:21:24.996 "trsvcid": "56300" 00:21:24.996 }, 00:21:24.996 "auth": { 00:21:24.996 "state": "completed", 00:21:24.996 "digest": "sha512", 00:21:24.996 "dhgroup": "ffdhe3072" 00:21:24.996 } 00:21:24.996 } 00:21:24.996 ]' 00:21:24.996 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.254 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.254 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.254 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.254 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.254 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.254 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.254 18:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.512 18:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.450 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.708 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.275 00:21:27.275 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.275 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.275 18:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.275 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.275 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.275 18:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.275 18:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.275 18:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.275 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.275 { 00:21:27.275 "cntlid": 117, 00:21:27.275 "qid": 0, 00:21:27.275 "state": "enabled", 00:21:27.275 "listen_address": { 00:21:27.275 "trtype": "TCP", 00:21:27.275 "adrfam": "IPv4", 00:21:27.275 "traddr": "10.0.0.2", 00:21:27.275 "trsvcid": "4420" 00:21:27.275 }, 00:21:27.275 "peer_address": { 00:21:27.275 "trtype": "TCP", 00:21:27.275 "adrfam": "IPv4", 00:21:27.275 "traddr": "10.0.0.1", 00:21:27.275 "trsvcid": "56330" 00:21:27.275 }, 00:21:27.275 "auth": { 00:21:27.275 "state": "completed", 00:21:27.275 "digest": "sha512", 00:21:27.275 "dhgroup": "ffdhe3072" 00:21:27.275 } 00:21:27.275 } 00:21:27.275 ]' 00:21:27.275 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.533 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.533 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.533 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.533 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:27.533 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.533 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.533 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.790 18:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.722 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.980 18:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.237 00:21:29.237 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.237 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.237 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.495 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.495 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.495 18:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.495 18:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.495 18:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.495 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.495 { 00:21:29.495 "cntlid": 119, 00:21:29.495 "qid": 0, 00:21:29.495 "state": "enabled", 00:21:29.495 "listen_address": { 00:21:29.495 "trtype": "TCP", 00:21:29.495 "adrfam": "IPv4", 00:21:29.495 "traddr": "10.0.0.2", 00:21:29.495 "trsvcid": "4420" 00:21:29.495 }, 00:21:29.495 "peer_address": { 00:21:29.495 "trtype": "TCP", 00:21:29.495 "adrfam": "IPv4", 00:21:29.495 "traddr": "10.0.0.1", 00:21:29.495 "trsvcid": "40218" 00:21:29.495 }, 00:21:29.495 "auth": { 00:21:29.495 "state": "completed", 00:21:29.495 "digest": "sha512", 00:21:29.495 "dhgroup": "ffdhe3072" 00:21:29.495 } 00:21:29.495 } 00:21:29.495 ]' 00:21:29.495 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.753 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.753 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.753 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.753 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.753 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.753 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.753 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.010 18:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.944 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.201 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.202 18:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.461 00:21:31.718 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.718 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.718 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.718 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.718 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.718 18:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.718 18:54:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.975 { 00:21:31.975 "cntlid": 121, 00:21:31.975 "qid": 0, 00:21:31.975 "state": "enabled", 00:21:31.975 "listen_address": { 00:21:31.975 "trtype": "TCP", 00:21:31.975 "adrfam": "IPv4", 00:21:31.975 "traddr": "10.0.0.2", 00:21:31.975 "trsvcid": "4420" 00:21:31.975 }, 00:21:31.975 "peer_address": { 00:21:31.975 "trtype": "TCP", 00:21:31.975 "adrfam": "IPv4", 00:21:31.975 "traddr": "10.0.0.1", 00:21:31.975 "trsvcid": "40250" 00:21:31.975 }, 00:21:31.975 "auth": { 00:21:31.975 "state": "completed", 00:21:31.975 "digest": "sha512", 00:21:31.975 "dhgroup": "ffdhe4096" 00:21:31.975 } 00:21:31.975 } 00:21:31.975 ]' 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.975 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.232 18:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.166 18:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.424 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.682 00:21:33.682 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.682 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.682 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.941 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.941 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.941 18:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.941 18:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.199 { 00:21:34.199 "cntlid": 123, 00:21:34.199 "qid": 0, 00:21:34.199 "state": "enabled", 00:21:34.199 "listen_address": { 00:21:34.199 "trtype": "TCP", 00:21:34.199 "adrfam": "IPv4", 00:21:34.199 "traddr": "10.0.0.2", 00:21:34.199 "trsvcid": "4420" 00:21:34.199 }, 00:21:34.199 "peer_address": { 00:21:34.199 "trtype": "TCP", 00:21:34.199 "adrfam": "IPv4", 00:21:34.199 "traddr": "10.0.0.1", 00:21:34.199 "trsvcid": "40270" 00:21:34.199 }, 00:21:34.199 "auth": { 00:21:34.199 "state": "completed", 00:21:34.199 "digest": "sha512", 00:21:34.199 "dhgroup": "ffdhe4096" 00:21:34.199 } 00:21:34.199 } 00:21:34.199 ]' 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.199 18:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.456 18:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.390 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.649 
18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.649 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.215 00:21:36.215 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.215 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.215 18:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.215 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.215 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.215 18:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.215 18:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.215 18:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.215 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.215 { 00:21:36.215 "cntlid": 125, 00:21:36.215 "qid": 0, 00:21:36.215 "state": "enabled", 00:21:36.215 "listen_address": { 00:21:36.215 "trtype": "TCP", 00:21:36.215 "adrfam": "IPv4", 00:21:36.215 "traddr": "10.0.0.2", 00:21:36.215 "trsvcid": "4420" 00:21:36.215 }, 00:21:36.215 "peer_address": { 00:21:36.215 "trtype": "TCP", 00:21:36.215 "adrfam": "IPv4", 00:21:36.215 "traddr": "10.0.0.1", 00:21:36.215 "trsvcid": "40302" 00:21:36.215 }, 00:21:36.215 "auth": { 00:21:36.215 "state": "completed", 00:21:36.215 "digest": "sha512", 00:21:36.215 "dhgroup": "ffdhe4096" 00:21:36.215 } 00:21:36.215 } 00:21:36.215 ]' 00:21:36.215 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.473 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.473 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.473 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.473 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.473 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.473 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.473 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.731 18:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:21:37.694 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.694 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.694 18:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.694 18:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.694 18:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.694 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.694 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.695 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.961 18:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.219 00:21:38.219 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.219 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.219 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.477 { 00:21:38.477 "cntlid": 127, 00:21:38.477 "qid": 0, 00:21:38.477 "state": "enabled", 00:21:38.477 "listen_address": { 00:21:38.477 "trtype": "TCP", 00:21:38.477 "adrfam": "IPv4", 00:21:38.477 "traddr": "10.0.0.2", 00:21:38.477 "trsvcid": "4420" 00:21:38.477 }, 00:21:38.477 "peer_address": { 00:21:38.477 "trtype": "TCP", 00:21:38.477 "adrfam": "IPv4", 00:21:38.477 "traddr": "10.0.0.1", 00:21:38.477 "trsvcid": "44554" 00:21:38.477 }, 00:21:38.477 "auth": { 00:21:38.477 "state": "completed", 00:21:38.477 "digest": "sha512", 00:21:38.477 "dhgroup": "ffdhe4096" 00:21:38.477 } 00:21:38.477 } 00:21:38.477 ]' 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.477 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.735 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.735 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.735 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.735 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.735 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.992 18:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
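Each pass also drives the kernel initiator through nvme-cli with in-band DH-HMAC-CHAP secrets, as in the connect/disconnect pairs above. A minimal sketch follows; the DHHC-1 strings are placeholders standing in for the throwaway test secrets printed in the log (a host secret plus, where a ckey exists, a controller secret), and the bare UUID doubles as --hostid exactly as the test does.

#!/usr/bin/env bash
# Sketch of the host-side connect/disconnect pair exercised above.
# Assumptions: nvme-cli with TCP and in-band auth support; the DHHC-1 values
# below are placeholders for the test secrets shown in the log.
set -euo pipefail

SUBNQN=nqn.2024-03.io.spdk:cnode0
UUID=5b23e107-7094-e311-b1cb-001e67a97d55
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${UUID}"
HOST_SECRET='DHHC-1:00:<host secret placeholder>'
CTRL_SECRET='DHHC-1:03:<controller secret placeholder>'

# Connect over TCP, presenting both the host and expected controller secrets.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid "$UUID" \
    --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

# Drop the connection and de-authorize the host again.
nvme disconnect -n "$SUBNQN"
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"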
00:21:39.929 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.187 18:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.753 00:21:40.753 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.753 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.753 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.011 { 00:21:41.011 "cntlid": 129, 00:21:41.011 "qid": 0, 00:21:41.011 "state": "enabled", 00:21:41.011 "listen_address": { 00:21:41.011 "trtype": "TCP", 00:21:41.011 "adrfam": "IPv4", 00:21:41.011 "traddr": "10.0.0.2", 00:21:41.011 "trsvcid": "4420" 00:21:41.011 }, 00:21:41.011 "peer_address": { 00:21:41.011 "trtype": "TCP", 00:21:41.011 "adrfam": "IPv4", 00:21:41.011 "traddr": "10.0.0.1", 00:21:41.011 "trsvcid": "44582" 00:21:41.011 }, 00:21:41.011 "auth": { 
00:21:41.011 "state": "completed", 00:21:41.011 "digest": "sha512", 00:21:41.011 "dhgroup": "ffdhe6144" 00:21:41.011 } 00:21:41.011 } 00:21:41.011 ]' 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.011 18:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.271 18:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.646 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.212 00:21:43.212 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.212 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.212 18:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.470 { 00:21:43.470 "cntlid": 131, 00:21:43.470 "qid": 0, 00:21:43.470 "state": "enabled", 00:21:43.470 "listen_address": { 00:21:43.470 "trtype": "TCP", 00:21:43.470 "adrfam": "IPv4", 00:21:43.470 "traddr": "10.0.0.2", 00:21:43.470 "trsvcid": "4420" 00:21:43.470 }, 00:21:43.470 "peer_address": { 00:21:43.470 "trtype": "TCP", 00:21:43.470 "adrfam": "IPv4", 00:21:43.470 "traddr": "10.0.0.1", 00:21:43.470 "trsvcid": "44610" 00:21:43.470 }, 00:21:43.470 "auth": { 00:21:43.470 "state": "completed", 00:21:43.470 "digest": "sha512", 00:21:43.470 "dhgroup": "ffdhe6144" 00:21:43.470 } 00:21:43.470 } 00:21:43.470 ]' 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.470 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.728 18:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.664 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.922 18:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:45.489 00:21:45.489 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.489 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.489 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.747 { 00:21:45.747 "cntlid": 133, 00:21:45.747 "qid": 0, 00:21:45.747 "state": "enabled", 00:21:45.747 "listen_address": { 00:21:45.747 "trtype": "TCP", 00:21:45.747 "adrfam": "IPv4", 00:21:45.747 "traddr": "10.0.0.2", 00:21:45.747 "trsvcid": "4420" 00:21:45.747 }, 00:21:45.747 "peer_address": { 00:21:45.747 "trtype": "TCP", 00:21:45.747 "adrfam": "IPv4", 00:21:45.747 "traddr": "10.0.0.1", 00:21:45.747 "trsvcid": "44628" 00:21:45.747 }, 00:21:45.747 "auth": { 00:21:45.747 "state": "completed", 00:21:45.747 "digest": "sha512", 00:21:45.747 "dhgroup": "ffdhe6144" 00:21:45.747 } 00:21:45.747 } 00:21:45.747 ]' 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.747 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.005 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.005 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.005 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.005 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.005 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.263 18:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:21:47.199 18:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.199 18:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.199 18:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.199 18:54:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.199 18:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.199 18:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.199 18:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.199 18:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.457 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.051 00:21:48.051 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.051 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.051 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.335 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.335 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.335 18:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.335 18:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.335 18:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.335 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.335 { 00:21:48.335 "cntlid": 135, 00:21:48.335 "qid": 0, 00:21:48.335 "state": "enabled", 00:21:48.335 "listen_address": { 
00:21:48.335 "trtype": "TCP", 00:21:48.335 "adrfam": "IPv4", 00:21:48.335 "traddr": "10.0.0.2", 00:21:48.335 "trsvcid": "4420" 00:21:48.335 }, 00:21:48.335 "peer_address": { 00:21:48.335 "trtype": "TCP", 00:21:48.335 "adrfam": "IPv4", 00:21:48.335 "traddr": "10.0.0.1", 00:21:48.335 "trsvcid": "40668" 00:21:48.335 }, 00:21:48.335 "auth": { 00:21:48.335 "state": "completed", 00:21:48.335 "digest": "sha512", 00:21:48.335 "dhgroup": "ffdhe6144" 00:21:48.335 } 00:21:48.335 } 00:21:48.335 ]' 00:21:48.335 18:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.335 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.335 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.335 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:48.335 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.335 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.335 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.335 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.593 18:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.532 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.790 18:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.724 00:21:50.724 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.724 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.724 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.982 { 00:21:50.982 "cntlid": 137, 00:21:50.982 "qid": 0, 00:21:50.982 "state": "enabled", 00:21:50.982 "listen_address": { 00:21:50.982 "trtype": "TCP", 00:21:50.982 "adrfam": "IPv4", 00:21:50.982 "traddr": "10.0.0.2", 00:21:50.982 "trsvcid": "4420" 00:21:50.982 }, 00:21:50.982 "peer_address": { 00:21:50.982 "trtype": "TCP", 00:21:50.982 "adrfam": "IPv4", 00:21:50.982 "traddr": "10.0.0.1", 00:21:50.982 "trsvcid": "40708" 00:21:50.982 }, 00:21:50.982 "auth": { 00:21:50.982 "state": "completed", 00:21:50.982 "digest": "sha512", 00:21:50.982 "dhgroup": "ffdhe8192" 00:21:50.982 } 00:21:50.982 } 00:21:50.982 ]' 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.982 18:55:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.982 18:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.241 18:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.616 18:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.617 18:55:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.550 00:21:53.550 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.550 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.550 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.809 { 00:21:53.809 "cntlid": 139, 00:21:53.809 "qid": 0, 00:21:53.809 "state": "enabled", 00:21:53.809 "listen_address": { 00:21:53.809 "trtype": "TCP", 00:21:53.809 "adrfam": "IPv4", 00:21:53.809 "traddr": "10.0.0.2", 00:21:53.809 "trsvcid": "4420" 00:21:53.809 }, 00:21:53.809 "peer_address": { 00:21:53.809 "trtype": "TCP", 00:21:53.809 "adrfam": "IPv4", 00:21:53.809 "traddr": "10.0.0.1", 00:21:53.809 "trsvcid": "40738" 00:21:53.809 }, 00:21:53.809 "auth": { 00:21:53.809 "state": "completed", 00:21:53.809 "digest": "sha512", 00:21:53.809 "dhgroup": "ffdhe8192" 00:21:53.809 } 00:21:53.809 } 00:21:53.809 ]' 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.809 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.067 18:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ODkyMTljNDFjMDRlOWE2YzQwNjI4ZmVhYjliNzcxMWKYs191: --dhchap-ctrl-secret DHHC-1:02:MDJiOTEzZDhlODQ1OWZjZGQ1NGNkNWUwMmJlMjcyYjJjZWViNmFhMTc5ZWE2MzMx1I7L1Q==: 00:21:54.999 18:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:54.999 18:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.999 18:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.258 18:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.258 18:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.258 18:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.258 18:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.258 18:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.516 18:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.453 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.453 { 00:21:56.453 "cntlid": 141, 00:21:56.453 "qid": 0, 00:21:56.453 "state": "enabled", 00:21:56.453 "listen_address": { 00:21:56.453 "trtype": "TCP", 00:21:56.453 "adrfam": "IPv4", 00:21:56.453 "traddr": "10.0.0.2", 00:21:56.453 "trsvcid": "4420" 00:21:56.453 }, 00:21:56.453 "peer_address": { 00:21:56.453 "trtype": "TCP", 00:21:56.453 "adrfam": "IPv4", 00:21:56.453 "traddr": "10.0.0.1", 00:21:56.453 "trsvcid": "40762" 00:21:56.453 }, 00:21:56.453 "auth": { 00:21:56.453 "state": "completed", 00:21:56.453 "digest": "sha512", 00:21:56.453 "dhgroup": "ffdhe8192" 00:21:56.453 } 00:21:56.453 } 00:21:56.453 ]' 00:21:56.453 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.712 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.712 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.712 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.712 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.712 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.712 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.712 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.969 18:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OTY3Mjc0ZjU0YThmOWMzYzQxM2U3MmUxOTg5YjJjM2Y5ZjU5OGRlZjUxYzgzZmYwjHwhbA==: --dhchap-ctrl-secret DHHC-1:01:Yjc2MTU3NTAwMTg4YmVlZTBhMWJjYTc5Y2JjZjI4ZTWacF+i: 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.904 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.161 18:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.096 00:21:59.096 18:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.096 18:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.096 18:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.355 { 00:21:59.355 "cntlid": 143, 00:21:59.355 "qid": 0, 00:21:59.355 "state": "enabled", 00:21:59.355 "listen_address": { 00:21:59.355 "trtype": "TCP", 00:21:59.355 "adrfam": "IPv4", 00:21:59.355 "traddr": "10.0.0.2", 00:21:59.355 "trsvcid": "4420" 00:21:59.355 }, 00:21:59.355 "peer_address": { 00:21:59.355 "trtype": "TCP", 00:21:59.355 "adrfam": "IPv4", 00:21:59.355 "traddr": "10.0.0.1", 00:21:59.355 "trsvcid": "59080" 00:21:59.355 }, 00:21:59.355 "auth": { 00:21:59.355 "state": "completed", 00:21:59.355 "digest": "sha512", 00:21:59.355 "dhgroup": "ffdhe8192" 00:21:59.355 } 00:21:59.355 } 00:21:59.355 ]' 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.355 18:55:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.355 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.613 18:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:00.549 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
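For reference, each connect_authenticate pass traced above reduces to the same host/target RPC sequence. The following is a minimal sketch, not part of the captured log: it assumes the DH-HMAC-CHAP keys named key0/ckey0 were already loaded on both sides earlier in the test, and it reuses the NQNs, address, and host RPC socket shown in the trace (the script's rpc_cmd wrapper is shown as a plain scripts/rpc.py call for brevity).

  # Target side: allow the host on the subsystem with its DH-HMAC-CHAP key pair
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: restrict the initiator to the digest/dhgroup under test, then attach
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Verify the qpair negotiated the expected digest/dhgroup and completed auth
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  # Tear down before the next digest/dhgroup/key combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
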
00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.117 18:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.118 18:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.118 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.118 18:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.061 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.061 { 00:22:02.061 "cntlid": 145, 00:22:02.061 "qid": 0, 00:22:02.061 "state": "enabled", 00:22:02.061 "listen_address": { 00:22:02.061 "trtype": "TCP", 00:22:02.061 "adrfam": "IPv4", 00:22:02.061 "traddr": "10.0.0.2", 00:22:02.061 "trsvcid": "4420" 00:22:02.061 }, 00:22:02.061 "peer_address": { 00:22:02.061 "trtype": "TCP", 00:22:02.061 "adrfam": "IPv4", 00:22:02.061 "traddr": "10.0.0.1", 00:22:02.061 "trsvcid": "59114" 00:22:02.061 }, 00:22:02.061 "auth": { 00:22:02.061 "state": "completed", 00:22:02.061 "digest": "sha512", 00:22:02.061 "dhgroup": "ffdhe8192" 00:22:02.061 } 00:22:02.061 } 00:22:02.061 ]' 00:22:02.061 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.319 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.319 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.319 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.319 18:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.319 18:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.319 18:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.319 18:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.577 
18:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkwNjdkNTNmMDUyNmMwMjlkNzI0M2EwZDI0ZTlhNDk5YmRiNGY1MTc3YjAxNjU0zV1o4g==: --dhchap-ctrl-secret DHHC-1:03:YmRjMDlmZDVmYTJmM2NlMmU5ZTNjNzE3MmU3OTY2MjhiMTMxMTYzZTJhMzAyZmIwNTYzYTVmMzlkYTlhMzE5Nvy5dtw=: 00:22:03.513 18:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.513 18:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.513 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.513 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:03.514 18:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:04.451 request: 00:22:04.451 { 00:22:04.451 "name": "nvme0", 00:22:04.451 "trtype": "tcp", 00:22:04.451 "traddr": 
"10.0.0.2", 00:22:04.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.451 "adrfam": "ipv4", 00:22:04.451 "trsvcid": "4420", 00:22:04.451 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.451 "dhchap_key": "key2", 00:22:04.451 "method": "bdev_nvme_attach_controller", 00:22:04.451 "req_id": 1 00:22:04.451 } 00:22:04.451 Got JSON-RPC error response 00:22:04.451 response: 00:22:04.451 { 00:22:04.452 "code": -5, 00:22:04.452 "message": "Input/output error" 00:22:04.452 } 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.452 18:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.388 request: 00:22:05.388 { 00:22:05.388 "name": "nvme0", 00:22:05.388 "trtype": "tcp", 00:22:05.388 "traddr": "10.0.0.2", 00:22:05.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.388 "adrfam": "ipv4", 00:22:05.388 "trsvcid": "4420", 00:22:05.388 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.388 "dhchap_key": "key1", 00:22:05.388 "dhchap_ctrlr_key": "ckey2", 00:22:05.388 "method": "bdev_nvme_attach_controller", 00:22:05.388 "req_id": 1 00:22:05.388 } 00:22:05.388 Got JSON-RPC error response 00:22:05.388 response: 00:22:05.388 { 00:22:05.388 "code": -5, 00:22:05.388 "message": "Input/output error" 00:22:05.388 } 00:22:05.388 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:05.388 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:05.388 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:05.388 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.389 18:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.328 request: 00:22:06.328 { 00:22:06.328 "name": "nvme0", 00:22:06.329 "trtype": "tcp", 00:22:06.329 "traddr": "10.0.0.2", 00:22:06.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.329 "adrfam": "ipv4", 00:22:06.329 "trsvcid": "4420", 00:22:06.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.329 "dhchap_key": "key1", 00:22:06.329 "dhchap_ctrlr_key": "ckey1", 00:22:06.329 "method": "bdev_nvme_attach_controller", 00:22:06.329 "req_id": 1 00:22:06.329 } 00:22:06.329 Got JSON-RPC error response 00:22:06.329 response: 00:22:06.329 { 00:22:06.329 "code": -5, 00:22:06.329 "message": "Input/output error" 00:22:06.329 } 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3542380 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3542380 ']' 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3542380 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:06.329 18:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3542380 00:22:06.329 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:06.329 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:06.329 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3542380' 00:22:06.329 killing process with pid 3542380 00:22:06.329 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3542380 00:22:06.329 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3542380 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:06.590 18:55:18 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3564917 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3564917 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3564917 ']' 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:06.590 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3564917 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3564917 ']' 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
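The trace above restarts the nvmf target for the remaining auth cases with --wait-for-rpc and the nvmf_auth log flag, then waitforlisten polls the UNIX-domain RPC socket until the application answers. Outside the harness, the same start-and-wait pattern is roughly the sketch below; the rpc_get_methods probe and the fixed poll interval are illustrative assumptions, not the harness's exact helper.

# Sketch only: start the target inside the test netns with RPC-gated init,
# then poll its RPC socket until it responds (paths taken from this log).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # assumed interval; the real waitforlisten backoff may differ
done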
00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:06.849 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.108 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.367 18:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.367 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.367 18:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.303 00:22:08.304 18:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.304 18:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.304 18:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.304 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.304 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.304 18:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.304 18:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.562 { 00:22:08.562 
"cntlid": 1, 00:22:08.562 "qid": 0, 00:22:08.562 "state": "enabled", 00:22:08.562 "listen_address": { 00:22:08.562 "trtype": "TCP", 00:22:08.562 "adrfam": "IPv4", 00:22:08.562 "traddr": "10.0.0.2", 00:22:08.562 "trsvcid": "4420" 00:22:08.562 }, 00:22:08.562 "peer_address": { 00:22:08.562 "trtype": "TCP", 00:22:08.562 "adrfam": "IPv4", 00:22:08.562 "traddr": "10.0.0.1", 00:22:08.562 "trsvcid": "33892" 00:22:08.562 }, 00:22:08.562 "auth": { 00:22:08.562 "state": "completed", 00:22:08.562 "digest": "sha512", 00:22:08.562 "dhgroup": "ffdhe8192" 00:22:08.562 } 00:22:08.562 } 00:22:08.562 ]' 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.562 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.820 18:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZTM0MTkwMzFiNTU3OTc3NzA1NTMxMWQzYzkxNjdmNzYzY2Q1YTQ4ZGNmYzY5Nzc0MGNlNTk0NzE0Mjc1ZjljY+WRr8k=: 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:09.757 18:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.014 18:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.273 request: 00:22:10.273 { 00:22:10.273 "name": "nvme0", 00:22:10.273 "trtype": "tcp", 00:22:10.273 "traddr": "10.0.0.2", 00:22:10.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.273 "adrfam": "ipv4", 00:22:10.273 "trsvcid": "4420", 00:22:10.273 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.273 "dhchap_key": "key3", 00:22:10.273 "method": "bdev_nvme_attach_controller", 00:22:10.273 "req_id": 1 00:22:10.273 } 00:22:10.273 Got JSON-RPC error response 00:22:10.273 response: 00:22:10.273 { 00:22:10.273 "code": -5, 00:22:10.273 "message": "Input/output error" 00:22:10.273 } 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:10.273 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.841 request: 00:22:10.841 { 00:22:10.841 "name": "nvme0", 00:22:10.841 "trtype": "tcp", 00:22:10.841 "traddr": "10.0.0.2", 00:22:10.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.841 "adrfam": "ipv4", 00:22:10.841 "trsvcid": "4420", 00:22:10.841 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.841 "dhchap_key": "key3", 00:22:10.841 "method": "bdev_nvme_attach_controller", 00:22:10.841 "req_id": 1 00:22:10.841 } 00:22:10.841 Got JSON-RPC error response 00:22:10.841 response: 00:22:10.841 { 00:22:10.841 "code": -5, 00:22:10.841 "message": "Input/output error" 00:22:10.841 } 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.841 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.099 18:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.357 request: 00:22:11.357 { 00:22:11.357 "name": "nvme0", 00:22:11.357 "trtype": "tcp", 00:22:11.357 "traddr": "10.0.0.2", 00:22:11.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.357 "adrfam": "ipv4", 00:22:11.357 "trsvcid": "4420", 00:22:11.357 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.357 "dhchap_key": "key0", 00:22:11.357 "dhchap_ctrlr_key": "key1", 00:22:11.357 "method": "bdev_nvme_attach_controller", 00:22:11.357 "req_id": 1 00:22:11.357 } 00:22:11.357 Got JSON-RPC error response 00:22:11.357 response: 00:22:11.357 { 00:22:11.357 "code": -5, 00:22:11.357 "message": "Input/output error" 00:22:11.357 } 00:22:11.357 18:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:11.357 18:55:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.357 18:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.357 18:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.357 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:11.357 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:11.994 00:22:11.994 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:11.994 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:11.994 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.994 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.994 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.994 18:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3542406 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3542406 ']' 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3542406 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3542406 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3542406' 00:22:12.255 killing process with pid 3542406 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3542406 00:22:12.255 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3542406 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
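The repeated attach attempts above are deliberate negative cases: each one first narrows the host's allowed DH-HMAC-CHAP digests or dhgroups (or re-registers the subsystem host without the matching key), then expects bdev_nvme_attach_controller to fail with the -5 Input/output error captured in the JSON-RPC responses. Reduced to one case, and using only RPCs already present in this trace, the check looks roughly like the sketch below; the hostnqn variable and the explicit failure test stand in for the harness's hostrpc and NOT helpers.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# Restrict the host side to a single digest, then attempt an attach that the
# harness expects to fail under that restriction; success would be a test bug.
$rpc_py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
if $rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi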
00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.825 rmmod nvme_tcp 00:22:12.825 rmmod nvme_fabrics 00:22:12.825 rmmod nvme_keyring 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3564917 ']' 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3564917 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3564917 ']' 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3564917 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3564917 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3564917' 00:22:12.825 killing process with pid 3564917 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3564917 00:22:12.825 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3564917 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.085 18:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.990 18:55:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.990 18:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hAX /tmp/spdk.key-sha256.JBb /tmp/spdk.key-sha384.sGP /tmp/spdk.key-sha512.KIF /tmp/spdk.key-sha512.xZg /tmp/spdk.key-sha384.m67 /tmp/spdk.key-sha256.KA2 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:14.990 00:22:14.990 real 3m9.042s 00:22:14.990 user 7m20.363s 00:22:14.990 sys 0m24.855s 00:22:14.990 18:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:14.990 18:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.990 ************************************ 00:22:14.990 END TEST 
nvmf_auth_target 00:22:14.990 ************************************ 00:22:14.990 18:55:26 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:14.990 18:55:26 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:14.990 18:55:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:14.990 18:55:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:14.990 18:55:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:15.248 ************************************ 00:22:15.248 START TEST nvmf_bdevio_no_huge 00:22:15.248 ************************************ 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:15.248 * Looking for test storage... 00:22:15.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
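The common.sh sourcing above fixes the host identity that the rest of this bdevio run passes to nvme connect. Sketched from the values visible in this trace (the exact derivation of NVME_HOSTID lives in nvmf/common.sh and is an assumption here):

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: the trailing uuid, matching the IDs above
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")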
00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:15.248 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.249 18:55:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:17.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:17.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.150 18:55:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:17.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:17.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.150 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.150 
18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.151 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.151 18:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.151 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:22:17.409 00:22:17.409 --- 10.0.0.2 ping statistics --- 00:22:17.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.409 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:22:17.409 00:22:17.409 --- 10.0.0.1 ping statistics --- 00:22:17.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.409 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3567677 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3567677 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3567677 ']' 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:17.409 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.409 [2024-07-25 18:55:29.119500] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:17.409 [2024-07-25 18:55:29.119575] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:17.409 [2024-07-25 18:55:29.189923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.409 [2024-07-25 18:55:29.268302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.409 [2024-07-25 18:55:29.268356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.409 [2024-07-25 18:55:29.268384] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.409 [2024-07-25 18:55:29.268396] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.409 [2024-07-25 18:55:29.268406] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.409 [2024-07-25 18:55:29.268457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:17.409 [2024-07-25 18:55:29.268520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:17.409 [2024-07-25 18:55:29.268584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:17.409 [2024-07-25 18:55:29.268587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.669 [2024-07-25 18:55:29.397830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
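The nvmf_tcp_init block above is the physical-NIC TCP bring-up: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, both get 10.0.0.x/24 addresses, port 4420 is opened in iptables, and the two pings confirm reachability before the target is started with --no-huge -s 1024 (no hugepages, a 1024 MB memory cap). Separating the two ports into different namespaces prevents the kernel from delivering the traffic over loopback, so the NVMe/TCP path actually exercises the physical link. Condensed from the commands already shown:

# Condensed from the trace above; the target side lives in cvl_0_0_ns_spdk.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1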
00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.669 Malloc0 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.669 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.670 [2024-07-25 18:55:29.438189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.670 { 00:22:17.670 "params": { 00:22:17.670 "name": "Nvme$subsystem", 00:22:17.670 "trtype": "$TEST_TRANSPORT", 00:22:17.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.670 "adrfam": "ipv4", 00:22:17.670 "trsvcid": "$NVMF_PORT", 00:22:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.670 "hdgst": ${hdgst:-false}, 00:22:17.670 "ddgst": ${ddgst:-false} 00:22:17.670 }, 00:22:17.670 "method": "bdev_nvme_attach_controller" 00:22:17.670 } 00:22:17.670 EOF 00:22:17.670 )") 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
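The bdevio run is configured entirely up front: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry shown just below, and the harness hands it to the bdevio binary as an SPDK JSON config on /dev/fd/62. The surrounding wrapper is not visible in this excerpt, so the standalone equivalent below is a sketch that assumes the standard SPDK config layout (a subsystems array holding bdev config entries) and reads the config from stdin instead of /dev/fd/62.

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json /dev/stdin --no-huge -s 1024 <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF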
00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:17.670 18:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:17.670 "params": { 00:22:17.670 "name": "Nvme1", 00:22:17.670 "trtype": "tcp", 00:22:17.670 "traddr": "10.0.0.2", 00:22:17.670 "adrfam": "ipv4", 00:22:17.670 "trsvcid": "4420", 00:22:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.670 "hdgst": false, 00:22:17.670 "ddgst": false 00:22:17.670 }, 00:22:17.670 "method": "bdev_nvme_attach_controller" 00:22:17.670 }' 00:22:17.670 [2024-07-25 18:55:29.484193] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:17.670 [2024-07-25 18:55:29.484261] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3567705 ] 00:22:17.670 [2024-07-25 18:55:29.543252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.930 [2024-07-25 18:55:29.631213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.930 [2024-07-25 18:55:29.631264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.930 [2024-07-25 18:55:29.631267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.190 I/O targets: 00:22:18.190 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:18.190 00:22:18.190 00:22:18.190 CUnit - A unit testing framework for C - Version 2.1-3 00:22:18.190 http://cunit.sourceforge.net/ 00:22:18.190 00:22:18.190 00:22:18.190 Suite: bdevio tests on: Nvme1n1 00:22:18.190 Test: blockdev write read block ...passed 00:22:18.190 Test: blockdev write zeroes read block ...passed 00:22:18.190 Test: blockdev write zeroes read no split ...passed 00:22:18.190 Test: blockdev write zeroes read split ...passed 00:22:18.190 Test: blockdev write zeroes read split partial ...passed 00:22:18.190 Test: blockdev reset ...[2024-07-25 18:55:29.997443] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:18.190 [2024-07-25 18:55:29.997558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a82a0 (9): Bad file descriptor 00:22:18.190 [2024-07-25 18:55:30.010479] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:18.190 passed 00:22:18.190 Test: blockdev write read 8 blocks ...passed 00:22:18.190 Test: blockdev write read size > 128k ...passed 00:22:18.190 Test: blockdev write read invalid size ...passed 00:22:18.190 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:18.190 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:18.190 Test: blockdev write read max offset ...passed 00:22:18.448 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:18.448 Test: blockdev writev readv 8 blocks ...passed 00:22:18.448 Test: blockdev writev readv 30 x 1block ...passed 00:22:18.448 Test: blockdev writev readv block ...passed 00:22:18.448 Test: blockdev writev readv size > 128k ...passed 00:22:18.448 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:18.448 Test: blockdev comparev and writev ...[2024-07-25 18:55:30.269030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.448 [2024-07-25 18:55:30.269107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.448 [2024-07-25 18:55:30.269144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.448 [2024-07-25 18:55:30.269170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.449 [2024-07-25 18:55:30.269531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.449 [2024-07-25 18:55:30.269565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.449 [2024-07-25 18:55:30.269603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.449 [2024-07-25 18:55:30.269630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.449 [2024-07-25 18:55:30.269977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.449 [2024-07-25 18:55:30.270003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.449 [2024-07-25 18:55:30.270038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.449 [2024-07-25 18:55:30.270073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.449 [2024-07-25 18:55:30.270429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.449 [2024-07-25 18:55:30.270455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.449 [2024-07-25 18:55:30.270489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:18.449 [2024-07-25 18:55:30.270516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.449 passed 00:22:18.706 Test: blockdev nvme passthru rw ...passed 00:22:18.706 Test: blockdev nvme passthru vendor specific ...[2024-07-25 18:55:30.353350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.706 [2024-07-25 18:55:30.353380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.706 [2024-07-25 18:55:30.353559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.706 [2024-07-25 18:55:30.353585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.706 [2024-07-25 18:55:30.353758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.707 [2024-07-25 18:55:30.353783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.707 [2024-07-25 18:55:30.353965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:18.707 [2024-07-25 18:55:30.353990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.707 passed 00:22:18.707 Test: blockdev nvme admin passthru ...passed 00:22:18.707 Test: blockdev copy ...passed 00:22:18.707 00:22:18.707 Run Summary: Type Total Ran Passed Failed Inactive 00:22:18.707 suites 1 1 n/a 0 0 00:22:18.707 tests 23 23 23 0 0 00:22:18.707 asserts 152 152 152 0 n/a 00:22:18.707 00:22:18.707 Elapsed time = 1.156 seconds 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.964 rmmod nvme_tcp 00:22:18.964 rmmod nvme_fabrics 00:22:18.964 rmmod nvme_keyring 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3567677 ']' 00:22:18.964 18:55:30 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3567677 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3567677 ']' 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3567677 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3567677 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3567677' 00:22:18.964 killing process with pid 3567677 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3567677 00:22:18.964 18:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3567677 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.528 18:55:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.430 18:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:21.430 00:22:21.430 real 0m6.323s 00:22:21.430 user 0m10.074s 00:22:21.430 sys 0m2.446s 00:22:21.430 18:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:21.430 18:55:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.430 ************************************ 00:22:21.430 END TEST nvmf_bdevio_no_huge 00:22:21.430 ************************************ 00:22:21.430 18:55:33 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.430 18:55:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:21.430 18:55:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:21.430 18:55:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:21.430 ************************************ 00:22:21.430 START TEST nvmf_tls 00:22:21.430 ************************************ 00:22:21.430 18:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:21.430 * Looking for test storage... 
00:22:21.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.686 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:21.687 18:55:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.588 
18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:23.588 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.588 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:23.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:23.589 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:23.589 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:22:23.589 00:22:23.589 --- 10.0.0.2 ping statistics --- 00:22:23.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.589 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:22:23.589 00:22:23.589 --- 10.0.0.1 ping statistics --- 00:22:23.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.589 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:23.589 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3569886 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3569886 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3569886 ']' 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.848 [2024-07-25 18:55:35.510991] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:23.848 [2024-07-25 18:55:35.511125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.848 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.848 [2024-07-25 18:55:35.576646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.848 [2024-07-25 18:55:35.665311] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.848 [2024-07-25 18:55:35.665386] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:23.848 [2024-07-25 18:55:35.665399] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.848 [2024-07-25 18:55:35.665410] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.848 [2024-07-25 18:55:35.665419] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.848 [2024-07-25 18:55:35.665452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.848 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.106 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.106 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:24.106 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:24.106 true 00:22:24.363 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.363 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:24.363 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:24.363 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:24.363 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:24.622 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:24.622 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:24.881 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:24.881 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:24.881 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:25.140 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:25.140 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:25.399 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:25.399 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:25.399 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:25.399 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:25.658 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:25.658 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:25.658 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:25.915 18:55:37 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:25.915 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:26.172 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:26.172 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:26.172 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:26.428 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:26.428 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:26.687 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ErXYX3RFJI 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.uMbSgxlyUD 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ErXYX3RFJI 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.uMbSgxlyUD 00:22:26.946 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:27.205 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:27.463 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ErXYX3RFJI 00:22:27.463 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ErXYX3RFJI 00:22:27.463 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:27.720 [2024-07-25 18:55:39.508190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.720 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:27.977 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:28.237 [2024-07-25 18:55:40.081765] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.237 [2024-07-25 18:55:40.082077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.237 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:28.836 malloc0 00:22:28.836 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:28.836 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ErXYX3RFJI 00:22:29.094 [2024-07-25 18:55:40.915925] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:29.094 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ErXYX3RFJI 00:22:29.094 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.304 Initializing NVMe Controllers 00:22:41.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:41.304 Initialization complete. Launching workers. 
00:22:41.304 ======================================================== 00:22:41.304 Latency(us) 00:22:41.304 Device Information : IOPS MiB/s Average min max 00:22:41.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7805.86 30.49 8201.69 1143.08 9531.51 00:22:41.304 ======================================================== 00:22:41.304 Total : 7805.86 30.49 8201.69 1143.08 9531.51 00:22:41.304 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ErXYX3RFJI 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ErXYX3RFJI' 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3571668 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3571668 /var/tmp/bdevperf.sock 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3571668 ']' 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.304 [2024-07-25 18:55:51.080688] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:41.304 [2024-07-25 18:55:51.080776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3571668 ] 00:22:41.304 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.304 [2024-07-25 18:55:51.136983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.304 [2024-07-25 18:55:51.220095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ErXYX3RFJI 00:22:41.304 [2024-07-25 18:55:51.613097] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.304 [2024-07-25 18:55:51.613233] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:41.304 TLSTESTn1 00:22:41.304 18:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:41.304 Running I/O for 10 seconds... 00:22:51.291 00:22:51.291 Latency(us) 00:22:51.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.291 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:51.291 Verification LBA range: start 0x0 length 0x2000 00:22:51.291 TLSTESTn1 : 10.02 3496.49 13.66 0.00 0.00 36549.06 7524.50 51652.08 00:22:51.291 =================================================================================================================== 00:22:51.291 Total : 3496.49 13.66 0.00 0.00 36549.06 7524.50 51652.08 00:22:51.291 0 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3571668 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3571668 ']' 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3571668 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3571668 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3571668' 00:22:51.291 killing process with pid 3571668 00:22:51.291 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3571668 00:22:51.291 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.291 00:22:51.292 Latency(us) 00:22:51.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:51.292 =================================================================================================================== 00:22:51.292 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.292 [2024-07-25 18:56:01.902677] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.292 18:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3571668 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uMbSgxlyUD 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uMbSgxlyUD 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uMbSgxlyUD 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uMbSgxlyUD' 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3572977 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3572977 /var/tmp/bdevperf.sock 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3572977 ']' 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.292 [2024-07-25 18:56:02.173361] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:51.292 [2024-07-25 18:56:02.173437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3572977 ] 00:22:51.292 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.292 [2024-07-25 18:56:02.229776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.292 [2024-07-25 18:56:02.310870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uMbSgxlyUD 00:22:51.292 [2024-07-25 18:56:02.649569] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.292 [2024-07-25 18:56:02.649703] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:51.292 [2024-07-25 18:56:02.659054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:51.292 [2024-07-25 18:56:02.659878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe70840 (107): Transport endpoint is not connected 00:22:51.292 [2024-07-25 18:56:02.660870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe70840 (9): Bad file descriptor 00:22:51.292 [2024-07-25 18:56:02.661869] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.292 [2024-07-25 18:56:02.661888] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:51.292 [2024-07-25 18:56:02.661921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:51.292 request: 00:22:51.292 { 00:22:51.292 "name": "TLSTEST", 00:22:51.292 "trtype": "tcp", 00:22:51.292 "traddr": "10.0.0.2", 00:22:51.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.292 "adrfam": "ipv4", 00:22:51.292 "trsvcid": "4420", 00:22:51.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.292 "psk": "/tmp/tmp.uMbSgxlyUD", 00:22:51.292 "method": "bdev_nvme_attach_controller", 00:22:51.292 "req_id": 1 00:22:51.292 } 00:22:51.292 Got JSON-RPC error response 00:22:51.292 response: 00:22:51.292 { 00:22:51.292 "code": -5, 00:22:51.292 "message": "Input/output error" 00:22:51.292 } 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3572977 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3572977 ']' 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3572977 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3572977 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3572977' 00:22:51.292 killing process with pid 3572977 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3572977 00:22:51.292 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.292 00:22:51.292 Latency(us) 00:22:51.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.292 =================================================================================================================== 00:22:51.292 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.292 [2024-07-25 18:56:02.712900] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3572977 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ErXYX3RFJI 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ErXYX3RFJI 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ErXYX3RFJI 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ErXYX3RFJI' 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3573110 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.292 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.293 18:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3573110 /var/tmp/bdevperf.sock 00:22:51.293 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3573110 ']' 00:22:51.293 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.293 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.293 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.293 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.293 18:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.293 [2024-07-25 18:56:02.974717] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:51.293 [2024-07-25 18:56:02.974803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573110 ] 00:22:51.293 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.293 [2024-07-25 18:56:03.040621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.293 [2024-07-25 18:56:03.134323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.551 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.551 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.551 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ErXYX3RFJI 00:22:51.810 [2024-07-25 18:56:03.520596] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.810 [2024-07-25 18:56:03.520725] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:51.810 [2024-07-25 18:56:03.531177] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:51.810 [2024-07-25 18:56:03.531208] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:51.810 [2024-07-25 18:56:03.531262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:51.810 [2024-07-25 18:56:03.531641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1878840 (107): Transport endpoint is not connected 00:22:51.810 [2024-07-25 18:56:03.532617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1878840 (9): Bad file descriptor 00:22:51.810 [2024-07-25 18:56:03.533616] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.810 [2024-07-25 18:56:03.533635] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:51.810 [2024-07-25 18:56:03.533667] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
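The failed attach above is the expected outcome of this negative test: the initiator identifies itself as nqn.2016-06.io.spdk:host2, the target has no PSK registered for the host2/cnode1 pairing, so the handshake never completes and the error dump that follows reports -5 (Input/output error), which is exactly what the surrounding NOT wrapper requires. The identity quoted in the errors appears to be a fixed prefix plus the host and subsystem NQNs; a rough illustration of that composition (the field layout is inferred from the error text only, not taken from the SPDK sources):

    # hypothetical reconstruction of the PSK identity reported in the errors above
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    identity="NVMe0R01 ${hostnqn} ${subnqn}"
    echo "$identity"   # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1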
00:22:51.810 request: 00:22:51.810 { 00:22:51.810 "name": "TLSTEST", 00:22:51.810 "trtype": "tcp", 00:22:51.810 "traddr": "10.0.0.2", 00:22:51.810 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.810 "adrfam": "ipv4", 00:22:51.810 "trsvcid": "4420", 00:22:51.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.810 "psk": "/tmp/tmp.ErXYX3RFJI", 00:22:51.810 "method": "bdev_nvme_attach_controller", 00:22:51.810 "req_id": 1 00:22:51.810 } 00:22:51.810 Got JSON-RPC error response 00:22:51.810 response: 00:22:51.810 { 00:22:51.810 "code": -5, 00:22:51.810 "message": "Input/output error" 00:22:51.810 } 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3573110 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3573110 ']' 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3573110 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3573110 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3573110' 00:22:51.810 killing process with pid 3573110 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3573110 00:22:51.810 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.810 00:22:51.810 Latency(us) 00:22:51.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.810 =================================================================================================================== 00:22:51.810 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.810 [2024-07-25 18:56:03.583700] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:51.810 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3573110 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ErXYX3RFJI 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ErXYX3RFJI 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ErXYX3RFJI 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ErXYX3RFJI' 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3573138 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3573138 /var/tmp/bdevperf.sock 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3573138 ']' 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.068 18:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.068 [2024-07-25 18:56:03.835915] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:52.068 [2024-07-25 18:56:03.836011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573138 ] 00:22:52.068 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.068 [2024-07-25 18:56:03.893305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.325 [2024-07-25 18:56:03.977643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.325 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.325 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:52.325 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ErXYX3RFJI 00:22:52.582 [2024-07-25 18:56:04.358183] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.582 [2024-07-25 18:56:04.358295] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:52.582 [2024-07-25 18:56:04.363367] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:52.582 [2024-07-25 18:56:04.363405] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:52.582 [2024-07-25 18:56:04.363457] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:52.582 [2024-07-25 18:56:04.364012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1669840 (107): Transport endpoint is not connected 00:22:52.582 [2024-07-25 18:56:04.365001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1669840 (9): Bad file descriptor 00:22:52.582 [2024-07-25 18:56:04.366000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:52.582 [2024-07-25 18:56:04.366019] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:52.582 [2024-07-25 18:56:04.366049] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
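Attaching to nqn.2016-06.io.spdk:cnode2 fails the same way: no PSK is registered for the host1/cnode2 identity, so the error dump below again shows -5 and the test counts that as a pass. Both cases lean on the NOT wrapper visible in the xtrace, which runs the wrapped command, captures its exit status into es, and succeeds only when the command failed. A minimal sketch with those semantics (simplified from the trace, not the exact autotest_common.sh helper):

    NOT() {
        local es=0
        "$@" || es=$?
        # a status above 128 means the command died on a signal; that still counts as a failure here
        (( es != 0 ))
    }
    # usage, mirroring the trace:
    # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ErXYX3RFJI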
00:22:52.582 request: 00:22:52.582 { 00:22:52.582 "name": "TLSTEST", 00:22:52.582 "trtype": "tcp", 00:22:52.582 "traddr": "10.0.0.2", 00:22:52.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.582 "adrfam": "ipv4", 00:22:52.582 "trsvcid": "4420", 00:22:52.582 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.582 "psk": "/tmp/tmp.ErXYX3RFJI", 00:22:52.582 "method": "bdev_nvme_attach_controller", 00:22:52.582 "req_id": 1 00:22:52.582 } 00:22:52.582 Got JSON-RPC error response 00:22:52.582 response: 00:22:52.582 { 00:22:52.582 "code": -5, 00:22:52.582 "message": "Input/output error" 00:22:52.582 } 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3573138 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3573138 ']' 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3573138 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3573138 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3573138' 00:22:52.582 killing process with pid 3573138 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3573138 00:22:52.582 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.582 00:22:52.582 Latency(us) 00:22:52.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.582 =================================================================================================================== 00:22:52.582 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.582 [2024-07-25 18:56:04.405580] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.582 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3573138 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3573268 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3573268 /var/tmp/bdevperf.sock 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3573268 ']' 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.840 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.840 [2024-07-25 18:56:04.634658] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:52.840 [2024-07-25 18:56:04.634728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573268 ] 00:22:52.840 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.840 [2024-07-25 18:56:04.692197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.098 [2024-07-25 18:56:04.777213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.098 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.098 18:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.098 18:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:53.358 [2024-07-25 18:56:05.116316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:53.358 [2024-07-25 18:56:05.118198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a68f10 (9): Bad file descriptor 00:22:53.358 [2024-07-25 18:56:05.119194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.358 [2024-07-25 18:56:05.119214] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:53.358 [2024-07-25 18:56:05.119246] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
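This variant attaches with no --psk at all, so the initiator never generates TLS credentials; the target listener was set up for TLS, the plain connection is dropped (errno 107, Transport endpoint is not connected), and the error dump below once more reports -5. Condensed from the trace, the failing call is simply the normal attach without the key:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # no --psk: expected to fail with JSON-RPC error -5 (Input/output error)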
00:22:53.358 request: 00:22:53.358 { 00:22:53.358 "name": "TLSTEST", 00:22:53.358 "trtype": "tcp", 00:22:53.358 "traddr": "10.0.0.2", 00:22:53.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.358 "adrfam": "ipv4", 00:22:53.358 "trsvcid": "4420", 00:22:53.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.358 "method": "bdev_nvme_attach_controller", 00:22:53.358 "req_id": 1 00:22:53.358 } 00:22:53.358 Got JSON-RPC error response 00:22:53.358 response: 00:22:53.358 { 00:22:53.358 "code": -5, 00:22:53.358 "message": "Input/output error" 00:22:53.358 } 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3573268 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3573268 ']' 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3573268 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3573268 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3573268' 00:22:53.358 killing process with pid 3573268 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3573268 00:22:53.358 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.358 00:22:53.358 Latency(us) 00:22:53.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.358 =================================================================================================================== 00:22:53.358 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.358 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3573268 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3569886 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3569886 ']' 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3569886 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3569886 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:53.618 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3569886' 00:22:53.618 killing process with pid 3569886 00:22:53.619 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3569886 
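Both the short-lived bdevperf instances and the long-running nvmf_tgt (pid 3569886 here) are torn down through the same killprocess sequence traced throughout this log: confirm the PID still exists, check via ps that it is an SPDK reactor rather than sudo, then kill it and wait for it to exit. A simplified sketch of that pattern (not the exact autotest_common.sh implementation):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_1 / reactor_2 in the traces above
        [ "$name" = sudo ] && return 1              # the real helper handles sudo-owned processes differently
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                  # wait works because the process was launched from this shell
    }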
00:22:53.619 [2024-07-25 18:56:05.415171] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:53.619 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3569886 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.QrbfHRdWSl 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.QrbfHRdWSl 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3573414 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3573414 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3573414 ']' 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.878 18:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.137 [2024-07-25 18:56:05.777796] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
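The long key written to /tmp/tmp.QrbfHRdWSl above is in the NVMe TLS PSK interchange form: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02, matching the digest argument 2 passed to format_interchange_psk), and a base64 blob, all colon-separated. The helper builds the blob with an inline python snippet; the sketch below reproduces that shape under the assumption (inferred, not verified against the SPDK sources) that a little-endian CRC32 of the configured key bytes is appended before base64 encoding:

    key=00112233445566778899aabbccddeeff0011223344556677
    digest=2
    b64=$(python3 -c 'import base64, binascii, struct, sys; k = sys.argv[1].encode(); print(base64.b64encode(k + struct.pack("<I", binascii.crc32(k))).decode())' "$key")
    printf 'NVMeTLSkey-1:%02d:%s:\n' "$digest" "$b64"
    # if the CRC32 assumption holds, this prints the NVMeTLSkey-1:02:...wWXNJw==: value captured above

The file is created with mktemp and locked down to mode 0600, which the permission tests later in this run depend on.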
00:22:54.137 [2024-07-25 18:56:05.777899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.137 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.137 [2024-07-25 18:56:05.848489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.137 [2024-07-25 18:56:05.936304] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.137 [2024-07-25 18:56:05.936384] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.137 [2024-07-25 18:56:05.936401] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.137 [2024-07-25 18:56:05.936416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.137 [2024-07-25 18:56:05.936427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.137 [2024-07-25 18:56:05.936459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.395 18:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.395 18:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:54.395 18:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.395 18:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.396 18:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.396 18:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.396 18:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.QrbfHRdWSl 00:22:54.396 18:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QrbfHRdWSl 00:22:54.396 18:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:54.654 [2024-07-25 18:56:06.309117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.654 18:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:54.911 18:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.169 [2024-07-25 18:56:06.790453] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.169 [2024-07-25 18:56:06.790733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.169 18:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.428 malloc0 00:22:55.428 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl 
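setup_nvmf_tgt, traced just above, prepares the TLS-capable target in six RPCs: create the TCP transport, create subsystem cnode1, add a TCP listener with -k, back it with a 32 MiB malloc bdev, attach that bdev as namespace 1, and allow host1 with the PSK file. Condensed into one sequence with the same arguments as the trace (only the rpc.py path is folded into a variable):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0                                                  # 32 MiB, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl

The deprecation notice that follows ('PSK path' scheduled for removal in v24.09) is the expected warning for the file-path form of --psk.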
00:22:55.689 [2024-07-25 18:56:07.531971] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QrbfHRdWSl 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QrbfHRdWSl' 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3573698 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3573698 /var/tmp/bdevperf.sock 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3573698 ']' 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.689 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.949 [2024-07-25 18:56:07.593268] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:22:55.949 [2024-07-25 18:56:07.593337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573698 ] 00:22:55.949 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.949 [2024-07-25 18:56:07.651219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.949 [2024-07-25 18:56:07.734109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.209 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:56.209 18:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:56.209 18:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl 00:22:56.209 [2024-07-25 18:56:08.060695] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.209 [2024-07-25 18:56:08.060824] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:56.468 TLSTESTn1 00:22:56.468 18:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.468 Running I/O for 10 seconds... 00:23:06.445 00:23:06.445 Latency(us) 00:23:06.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.445 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:06.445 Verification LBA range: start 0x0 length 0x2000 00:23:06.445 TLSTESTn1 : 10.02 3528.74 13.78 0.00 0.00 36212.95 9466.31 33593.27 00:23:06.445 =================================================================================================================== 00:23:06.445 Total : 3528.74 13.78 0.00 0.00 36212.95 9466.31 33593.27 00:23:06.445 0 00:23:06.445 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.445 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3573698 00:23:06.445 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3573698 ']' 00:23:06.445 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3573698 00:23:06.445 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:06.445 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3573698 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3573698' 00:23:06.703 killing process with pid 3573698 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3573698 00:23:06.703 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.703 00:23:06.703 Latency(us) 00:23:06.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:06.703 =================================================================================================================== 00:23:06.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.703 [2024-07-25 18:56:18.349715] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3573698 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.QrbfHRdWSl 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QrbfHRdWSl 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QrbfHRdWSl 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QrbfHRdWSl 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QrbfHRdWSl' 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3574897 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3574897 /var/tmp/bdevperf.sock 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3574897 ']' 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:06.703 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.961 [2024-07-25 18:56:18.621666] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
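That run is the positive control for this series: with matching NQNs, a 0600 key, and the same PSK file on both sides, TLSTESTn1 sustains roughly 3528 IOPS of 4 KiB verify I/O at queue depth 128 over the 10-second run. The initiator side of that flow, condensed from the trace (the script waits for the bdevperf RPC socket before issuing the attach):

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $BDEVPERF -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The bdevperf instance starting here repeats the same flow now that the key has been made world-readable (chmod 0666 above), to confirm the initiator rejects a PSK file with loose permissions.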
00:23:06.961 [2024-07-25 18:56:18.621739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574897 ] 00:23:06.961 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.961 [2024-07-25 18:56:18.683653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.961 [2024-07-25 18:56:18.769016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.218 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.218 18:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:07.218 18:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl 00:23:07.218 [2024-07-25 18:56:19.091887] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.218 [2024-07-25 18:56:19.091989] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:07.218 [2024-07-25 18:56:19.092016] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.QrbfHRdWSl 00:23:07.476 request: 00:23:07.476 { 00:23:07.476 "name": "TLSTEST", 00:23:07.476 "trtype": "tcp", 00:23:07.476 "traddr": "10.0.0.2", 00:23:07.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.476 "adrfam": "ipv4", 00:23:07.476 "trsvcid": "4420", 00:23:07.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.476 "psk": "/tmp/tmp.QrbfHRdWSl", 00:23:07.476 "method": "bdev_nvme_attach_controller", 00:23:07.476 "req_id": 1 00:23:07.476 } 00:23:07.476 Got JSON-RPC error response 00:23:07.476 response: 00:23:07.476 { 00:23:07.476 "code": -1, 00:23:07.476 "message": "Operation not permitted" 00:23:07.476 } 00:23:07.476 18:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3574897 00:23:07.476 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3574897 ']' 00:23:07.476 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3574897 00:23:07.476 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:07.476 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.476 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3574897 00:23:07.477 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:07.477 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:07.477 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3574897' 00:23:07.477 killing process with pid 3574897 00:23:07.477 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3574897 00:23:07.477 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.477 00:23:07.477 Latency(us) 00:23:07.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.477 =================================================================================================================== 00:23:07.477 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.477 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3574897 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3573414 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3573414 ']' 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3573414 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3573414 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3573414' 00:23:07.735 killing process with pid 3573414 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3573414 00:23:07.735 [2024-07-25 18:56:19.388705] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:07.735 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3573414 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3575037 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3575037 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3575037 ']' 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.993 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.993 [2024-07-25 18:56:19.679310] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
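The two permission tests hinge on the key file's mode alone: with the file at 0666 the initiator's bdev_nvme_load_psk refused it above ('Incorrect permissions for PSK file', surfaced as -1 Operation not permitted), and the target starting below applies the same check when nvmf_subsystem_add_host tries to read it, failing with -32603 Internal error. Only these two modes are exercised by the trace, so treat anything beyond '0600 accepted, 0666 rejected' as untested here:

    chmod 0600 /tmp/tmp.QrbfHRdWSl   # accepted by both bdev_nvme_attach_controller and nvmf_subsystem_add_host
    chmod 0666 /tmp/tmp.QrbfHRdWSl   # rejected on both sides: 'Incorrect permissions for PSK file'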
00:23:07.993 [2024-07-25 18:56:19.679397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.993 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.993 [2024-07-25 18:56:19.747027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.993 [2024-07-25 18:56:19.841268] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.993 [2024-07-25 18:56:19.841323] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.993 [2024-07-25 18:56:19.841354] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.993 [2024-07-25 18:56:19.841368] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.993 [2024-07-25 18:56:19.841380] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.993 [2024-07-25 18:56:19.841416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.QrbfHRdWSl 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QrbfHRdWSl 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.QrbfHRdWSl 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QrbfHRdWSl 00:23:08.285 18:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.543 [2024-07-25 18:56:20.215069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.543 18:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.800 18:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:09.058 [2024-07-25 18:56:20.788563] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:09.058 [2024-07-25 18:56:20.788791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.058 18:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:09.316 malloc0 00:23:09.316 18:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:09.574 18:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl 00:23:09.833 [2024-07-25 18:56:21.546439] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:09.833 [2024-07-25 18:56:21.546488] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:09.833 [2024-07-25 18:56:21.546524] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:09.833 request: 00:23:09.833 { 00:23:09.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.833 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.833 "psk": "/tmp/tmp.QrbfHRdWSl", 00:23:09.833 "method": "nvmf_subsystem_add_host", 00:23:09.833 "req_id": 1 00:23:09.833 } 00:23:09.833 Got JSON-RPC error response 00:23:09.833 response: 00:23:09.833 { 00:23:09.833 "code": -32603, 00:23:09.833 "message": "Internal error" 00:23:09.833 } 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3575037 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3575037 ']' 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3575037 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575037 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575037' 00:23:09.833 killing process with pid 3575037 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3575037 00:23:09.833 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3575037 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.QrbfHRdWSl 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3575332 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:10.092 18:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3575332 00:23:10.093 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3575332 ']' 00:23:10.093 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.093 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:10.093 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.093 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:10.093 18:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.093 [2024-07-25 18:56:21.912568] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:10.093 [2024-07-25 18:56:21.912653] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.093 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.351 [2024-07-25 18:56:21.976770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.351 [2024-07-25 18:56:22.071086] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.351 [2024-07-25 18:56:22.071149] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.351 [2024-07-25 18:56:22.071163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.351 [2024-07-25 18:56:22.071176] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.351 [2024-07-25 18:56:22.071186] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:10.351 [2024-07-25 18:56:22.071222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.QrbfHRdWSl 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QrbfHRdWSl 00:23:10.351 18:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.609 [2024-07-25 18:56:22.486307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.867 18:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.125 18:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.125 [2024-07-25 18:56:22.971631] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.125 [2024-07-25 18:56:22.971880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.125 18:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.388 malloc0 00:23:11.388 18:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.644 18:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl 00:23:11.902 [2024-07-25 18:56:23.720983] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3575614 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3575614 /var/tmp/bdevperf.sock 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3575614 ']' 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.902 18:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.159 [2024-07-25 18:56:23.782616] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:12.159 [2024-07-25 18:56:23.782683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3575614 ] 00:23:12.159 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.160 [2024-07-25 18:56:23.838624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.160 [2024-07-25 18:56:23.921419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.160 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.160 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.160 18:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl 00:23:12.417 [2024-07-25 18:56:24.251773] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.417 [2024-07-25 18:56:24.251899] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:12.674 TLSTESTn1 00:23:12.674 18:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:12.931 18:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:12.931 "subsystems": [ 00:23:12.931 { 00:23:12.931 "subsystem": "keyring", 00:23:12.931 "config": [] 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "subsystem": "iobuf", 00:23:12.931 "config": [ 00:23:12.931 { 00:23:12.931 "method": "iobuf_set_options", 00:23:12.931 "params": { 00:23:12.931 "small_pool_count": 8192, 00:23:12.931 "large_pool_count": 1024, 00:23:12.931 "small_bufsize": 8192, 00:23:12.931 "large_bufsize": 135168 00:23:12.931 } 00:23:12.931 } 00:23:12.931 ] 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "subsystem": "sock", 00:23:12.931 "config": [ 00:23:12.931 { 00:23:12.931 "method": "sock_set_default_impl", 00:23:12.931 "params": { 00:23:12.931 "impl_name": "posix" 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "sock_impl_set_options", 00:23:12.931 "params": { 00:23:12.931 "impl_name": "ssl", 00:23:12.931 "recv_buf_size": 4096, 00:23:12.931 "send_buf_size": 4096, 00:23:12.931 "enable_recv_pipe": true, 00:23:12.931 "enable_quickack": false, 00:23:12.931 "enable_placement_id": 0, 00:23:12.931 "enable_zerocopy_send_server": true, 00:23:12.931 "enable_zerocopy_send_client": false, 00:23:12.931 "zerocopy_threshold": 0, 00:23:12.931 "tls_version": 0, 00:23:12.931 "enable_ktls": false 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "sock_impl_set_options", 00:23:12.931 "params": { 00:23:12.931 "impl_name": "posix", 00:23:12.931 "recv_buf_size": 2097152, 00:23:12.931 "send_buf_size": 
2097152, 00:23:12.931 "enable_recv_pipe": true, 00:23:12.931 "enable_quickack": false, 00:23:12.931 "enable_placement_id": 0, 00:23:12.931 "enable_zerocopy_send_server": true, 00:23:12.931 "enable_zerocopy_send_client": false, 00:23:12.931 "zerocopy_threshold": 0, 00:23:12.931 "tls_version": 0, 00:23:12.931 "enable_ktls": false 00:23:12.931 } 00:23:12.931 } 00:23:12.931 ] 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "subsystem": "vmd", 00:23:12.931 "config": [] 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "subsystem": "accel", 00:23:12.931 "config": [ 00:23:12.931 { 00:23:12.931 "method": "accel_set_options", 00:23:12.931 "params": { 00:23:12.931 "small_cache_size": 128, 00:23:12.931 "large_cache_size": 16, 00:23:12.931 "task_count": 2048, 00:23:12.931 "sequence_count": 2048, 00:23:12.931 "buf_count": 2048 00:23:12.931 } 00:23:12.931 } 00:23:12.931 ] 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "subsystem": "bdev", 00:23:12.931 "config": [ 00:23:12.931 { 00:23:12.931 "method": "bdev_set_options", 00:23:12.931 "params": { 00:23:12.931 "bdev_io_pool_size": 65535, 00:23:12.931 "bdev_io_cache_size": 256, 00:23:12.931 "bdev_auto_examine": true, 00:23:12.931 "iobuf_small_cache_size": 128, 00:23:12.931 "iobuf_large_cache_size": 16 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "bdev_raid_set_options", 00:23:12.931 "params": { 00:23:12.931 "process_window_size_kb": 1024 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "bdev_iscsi_set_options", 00:23:12.931 "params": { 00:23:12.931 "timeout_sec": 30 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "bdev_nvme_set_options", 00:23:12.931 "params": { 00:23:12.931 "action_on_timeout": "none", 00:23:12.931 "timeout_us": 0, 00:23:12.931 "timeout_admin_us": 0, 00:23:12.931 "keep_alive_timeout_ms": 10000, 00:23:12.931 "arbitration_burst": 0, 00:23:12.931 "low_priority_weight": 0, 00:23:12.931 "medium_priority_weight": 0, 00:23:12.931 "high_priority_weight": 0, 00:23:12.931 "nvme_adminq_poll_period_us": 10000, 00:23:12.931 "nvme_ioq_poll_period_us": 0, 00:23:12.931 "io_queue_requests": 0, 00:23:12.931 "delay_cmd_submit": true, 00:23:12.931 "transport_retry_count": 4, 00:23:12.931 "bdev_retry_count": 3, 00:23:12.931 "transport_ack_timeout": 0, 00:23:12.931 "ctrlr_loss_timeout_sec": 0, 00:23:12.931 "reconnect_delay_sec": 0, 00:23:12.931 "fast_io_fail_timeout_sec": 0, 00:23:12.931 "disable_auto_failback": false, 00:23:12.931 "generate_uuids": false, 00:23:12.931 "transport_tos": 0, 00:23:12.931 "nvme_error_stat": false, 00:23:12.931 "rdma_srq_size": 0, 00:23:12.931 "io_path_stat": false, 00:23:12.931 "allow_accel_sequence": false, 00:23:12.931 "rdma_max_cq_size": 0, 00:23:12.931 "rdma_cm_event_timeout_ms": 0, 00:23:12.931 "dhchap_digests": [ 00:23:12.931 "sha256", 00:23:12.931 "sha384", 00:23:12.931 "sha512" 00:23:12.931 ], 00:23:12.931 "dhchap_dhgroups": [ 00:23:12.931 "null", 00:23:12.931 "ffdhe2048", 00:23:12.931 "ffdhe3072", 00:23:12.931 "ffdhe4096", 00:23:12.931 "ffdhe6144", 00:23:12.931 "ffdhe8192" 00:23:12.931 ] 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "bdev_nvme_set_hotplug", 00:23:12.931 "params": { 00:23:12.931 "period_us": 100000, 00:23:12.931 "enable": false 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "bdev_malloc_create", 00:23:12.931 "params": { 00:23:12.931 "name": "malloc0", 00:23:12.931 "num_blocks": 8192, 00:23:12.931 "block_size": 4096, 00:23:12.931 "physical_block_size": 4096, 00:23:12.931 "uuid": 
"6f89c4bc-8710-4c4a-a2a2-9dcd6a1f9344", 00:23:12.931 "optimal_io_boundary": 0 00:23:12.931 } 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "method": "bdev_wait_for_examine" 00:23:12.931 } 00:23:12.931 ] 00:23:12.931 }, 00:23:12.931 { 00:23:12.931 "subsystem": "nbd", 00:23:12.932 "config": [] 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "subsystem": "scheduler", 00:23:12.932 "config": [ 00:23:12.932 { 00:23:12.932 "method": "framework_set_scheduler", 00:23:12.932 "params": { 00:23:12.932 "name": "static" 00:23:12.932 } 00:23:12.932 } 00:23:12.932 ] 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "subsystem": "nvmf", 00:23:12.932 "config": [ 00:23:12.932 { 00:23:12.932 "method": "nvmf_set_config", 00:23:12.932 "params": { 00:23:12.932 "discovery_filter": "match_any", 00:23:12.932 "admin_cmd_passthru": { 00:23:12.932 "identify_ctrlr": false 00:23:12.932 } 00:23:12.932 } 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "method": "nvmf_set_max_subsystems", 00:23:12.932 "params": { 00:23:12.932 "max_subsystems": 1024 00:23:12.932 } 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "method": "nvmf_set_crdt", 00:23:12.932 "params": { 00:23:12.932 "crdt1": 0, 00:23:12.932 "crdt2": 0, 00:23:12.932 "crdt3": 0 00:23:12.932 } 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "method": "nvmf_create_transport", 00:23:12.932 "params": { 00:23:12.932 "trtype": "TCP", 00:23:12.932 "max_queue_depth": 128, 00:23:12.932 "max_io_qpairs_per_ctrlr": 127, 00:23:12.932 "in_capsule_data_size": 4096, 00:23:12.932 "max_io_size": 131072, 00:23:12.932 "io_unit_size": 131072, 00:23:12.932 "max_aq_depth": 128, 00:23:12.932 "num_shared_buffers": 511, 00:23:12.932 "buf_cache_size": 4294967295, 00:23:12.932 "dif_insert_or_strip": false, 00:23:12.932 "zcopy": false, 00:23:12.932 "c2h_success": false, 00:23:12.932 "sock_priority": 0, 00:23:12.932 "abort_timeout_sec": 1, 00:23:12.932 "ack_timeout": 0, 00:23:12.932 "data_wr_pool_size": 0 00:23:12.932 } 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "method": "nvmf_create_subsystem", 00:23:12.932 "params": { 00:23:12.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.932 "allow_any_host": false, 00:23:12.932 "serial_number": "SPDK00000000000001", 00:23:12.932 "model_number": "SPDK bdev Controller", 00:23:12.932 "max_namespaces": 10, 00:23:12.932 "min_cntlid": 1, 00:23:12.932 "max_cntlid": 65519, 00:23:12.932 "ana_reporting": false 00:23:12.932 } 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "method": "nvmf_subsystem_add_host", 00:23:12.932 "params": { 00:23:12.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.932 "host": "nqn.2016-06.io.spdk:host1", 00:23:12.932 "psk": "/tmp/tmp.QrbfHRdWSl" 00:23:12.932 } 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "method": "nvmf_subsystem_add_ns", 00:23:12.932 "params": { 00:23:12.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.932 "namespace": { 00:23:12.932 "nsid": 1, 00:23:12.932 "bdev_name": "malloc0", 00:23:12.932 "nguid": "6F89C4BC87104C4AA2A29DCD6A1F9344", 00:23:12.932 "uuid": "6f89c4bc-8710-4c4a-a2a2-9dcd6a1f9344", 00:23:12.932 "no_auto_visible": false 00:23:12.932 } 00:23:12.932 } 00:23:12.932 }, 00:23:12.932 { 00:23:12.932 "method": "nvmf_subsystem_add_listener", 00:23:12.932 "params": { 00:23:12.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.932 "listen_address": { 00:23:12.932 "trtype": "TCP", 00:23:12.932 "adrfam": "IPv4", 00:23:12.932 "traddr": "10.0.0.2", 00:23:12.932 "trsvcid": "4420" 00:23:12.932 }, 00:23:12.932 "secure_channel": true 00:23:12.932 } 00:23:12.932 } 00:23:12.932 ] 00:23:12.932 } 00:23:12.932 ] 00:23:12.932 }' 00:23:12.932 18:56:24 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:13.191 18:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:13.191 "subsystems": [ 00:23:13.191 { 00:23:13.191 "subsystem": "keyring", 00:23:13.191 "config": [] 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "subsystem": "iobuf", 00:23:13.191 "config": [ 00:23:13.191 { 00:23:13.191 "method": "iobuf_set_options", 00:23:13.191 "params": { 00:23:13.191 "small_pool_count": 8192, 00:23:13.191 "large_pool_count": 1024, 00:23:13.191 "small_bufsize": 8192, 00:23:13.191 "large_bufsize": 135168 00:23:13.191 } 00:23:13.191 } 00:23:13.191 ] 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "subsystem": "sock", 00:23:13.191 "config": [ 00:23:13.191 { 00:23:13.191 "method": "sock_set_default_impl", 00:23:13.191 "params": { 00:23:13.191 "impl_name": "posix" 00:23:13.191 } 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "method": "sock_impl_set_options", 00:23:13.191 "params": { 00:23:13.191 "impl_name": "ssl", 00:23:13.191 "recv_buf_size": 4096, 00:23:13.191 "send_buf_size": 4096, 00:23:13.191 "enable_recv_pipe": true, 00:23:13.191 "enable_quickack": false, 00:23:13.191 "enable_placement_id": 0, 00:23:13.191 "enable_zerocopy_send_server": true, 00:23:13.191 "enable_zerocopy_send_client": false, 00:23:13.191 "zerocopy_threshold": 0, 00:23:13.191 "tls_version": 0, 00:23:13.191 "enable_ktls": false 00:23:13.191 } 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "method": "sock_impl_set_options", 00:23:13.191 "params": { 00:23:13.191 "impl_name": "posix", 00:23:13.191 "recv_buf_size": 2097152, 00:23:13.191 "send_buf_size": 2097152, 00:23:13.191 "enable_recv_pipe": true, 00:23:13.191 "enable_quickack": false, 00:23:13.191 "enable_placement_id": 0, 00:23:13.191 "enable_zerocopy_send_server": true, 00:23:13.191 "enable_zerocopy_send_client": false, 00:23:13.191 "zerocopy_threshold": 0, 00:23:13.191 "tls_version": 0, 00:23:13.191 "enable_ktls": false 00:23:13.191 } 00:23:13.191 } 00:23:13.191 ] 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "subsystem": "vmd", 00:23:13.191 "config": [] 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "subsystem": "accel", 00:23:13.191 "config": [ 00:23:13.191 { 00:23:13.191 "method": "accel_set_options", 00:23:13.191 "params": { 00:23:13.191 "small_cache_size": 128, 00:23:13.191 "large_cache_size": 16, 00:23:13.191 "task_count": 2048, 00:23:13.191 "sequence_count": 2048, 00:23:13.191 "buf_count": 2048 00:23:13.191 } 00:23:13.191 } 00:23:13.191 ] 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "subsystem": "bdev", 00:23:13.191 "config": [ 00:23:13.191 { 00:23:13.191 "method": "bdev_set_options", 00:23:13.191 "params": { 00:23:13.191 "bdev_io_pool_size": 65535, 00:23:13.191 "bdev_io_cache_size": 256, 00:23:13.191 "bdev_auto_examine": true, 00:23:13.191 "iobuf_small_cache_size": 128, 00:23:13.191 "iobuf_large_cache_size": 16 00:23:13.191 } 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "method": "bdev_raid_set_options", 00:23:13.191 "params": { 00:23:13.191 "process_window_size_kb": 1024 00:23:13.191 } 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "method": "bdev_iscsi_set_options", 00:23:13.191 "params": { 00:23:13.191 "timeout_sec": 30 00:23:13.191 } 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "method": "bdev_nvme_set_options", 00:23:13.191 "params": { 00:23:13.191 "action_on_timeout": "none", 00:23:13.191 "timeout_us": 0, 00:23:13.191 "timeout_admin_us": 0, 00:23:13.191 "keep_alive_timeout_ms": 10000, 00:23:13.191 "arbitration_burst": 0, 
00:23:13.191 "low_priority_weight": 0, 00:23:13.191 "medium_priority_weight": 0, 00:23:13.191 "high_priority_weight": 0, 00:23:13.191 "nvme_adminq_poll_period_us": 10000, 00:23:13.191 "nvme_ioq_poll_period_us": 0, 00:23:13.191 "io_queue_requests": 512, 00:23:13.191 "delay_cmd_submit": true, 00:23:13.191 "transport_retry_count": 4, 00:23:13.191 "bdev_retry_count": 3, 00:23:13.191 "transport_ack_timeout": 0, 00:23:13.191 "ctrlr_loss_timeout_sec": 0, 00:23:13.191 "reconnect_delay_sec": 0, 00:23:13.191 "fast_io_fail_timeout_sec": 0, 00:23:13.191 "disable_auto_failback": false, 00:23:13.191 "generate_uuids": false, 00:23:13.191 "transport_tos": 0, 00:23:13.191 "nvme_error_stat": false, 00:23:13.191 "rdma_srq_size": 0, 00:23:13.191 "io_path_stat": false, 00:23:13.191 "allow_accel_sequence": false, 00:23:13.191 "rdma_max_cq_size": 0, 00:23:13.191 "rdma_cm_event_timeout_ms": 0, 00:23:13.191 "dhchap_digests": [ 00:23:13.191 "sha256", 00:23:13.191 "sha384", 00:23:13.191 "sha512" 00:23:13.191 ], 00:23:13.191 "dhchap_dhgroups": [ 00:23:13.191 "null", 00:23:13.191 "ffdhe2048", 00:23:13.191 "ffdhe3072", 00:23:13.191 "ffdhe4096", 00:23:13.191 "ffdhe6144", 00:23:13.191 "ffdhe8192" 00:23:13.191 ] 00:23:13.191 } 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "method": "bdev_nvme_attach_controller", 00:23:13.191 "params": { 00:23:13.191 "name": "TLSTEST", 00:23:13.191 "trtype": "TCP", 00:23:13.191 "adrfam": "IPv4", 00:23:13.191 "traddr": "10.0.0.2", 00:23:13.191 "trsvcid": "4420", 00:23:13.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.191 "prchk_reftag": false, 00:23:13.191 "prchk_guard": false, 00:23:13.191 "ctrlr_loss_timeout_sec": 0, 00:23:13.191 "reconnect_delay_sec": 0, 00:23:13.191 "fast_io_fail_timeout_sec": 0, 00:23:13.191 "psk": "/tmp/tmp.QrbfHRdWSl", 00:23:13.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.191 "hdgst": false, 00:23:13.191 "ddgst": false 00:23:13.191 } 00:23:13.191 }, 00:23:13.191 { 00:23:13.191 "method": "bdev_nvme_set_hotplug", 00:23:13.192 "params": { 00:23:13.192 "period_us": 100000, 00:23:13.192 "enable": false 00:23:13.192 } 00:23:13.192 }, 00:23:13.192 { 00:23:13.192 "method": "bdev_wait_for_examine" 00:23:13.192 } 00:23:13.192 ] 00:23:13.192 }, 00:23:13.192 { 00:23:13.192 "subsystem": "nbd", 00:23:13.192 "config": [] 00:23:13.192 } 00:23:13.192 ] 00:23:13.192 }' 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3575614 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3575614 ']' 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3575614 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575614 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575614' 00:23:13.192 killing process with pid 3575614 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3575614 00:23:13.192 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.192 00:23:13.192 Latency(us) 00:23:13.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:13.192 =================================================================================================================== 00:23:13.192 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.192 [2024-07-25 18:56:24.988925] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:13.192 18:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3575614 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3575332 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3575332 ']' 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3575332 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575332 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575332' 00:23:13.450 killing process with pid 3575332 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3575332 00:23:13.450 [2024-07-25 18:56:25.242277] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:13.450 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3575332 00:23:13.710 18:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:13.710 18:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.710 18:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:13.710 "subsystems": [ 00:23:13.710 { 00:23:13.710 "subsystem": "keyring", 00:23:13.710 "config": [] 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "subsystem": "iobuf", 00:23:13.710 "config": [ 00:23:13.710 { 00:23:13.710 "method": "iobuf_set_options", 00:23:13.710 "params": { 00:23:13.710 "small_pool_count": 8192, 00:23:13.710 "large_pool_count": 1024, 00:23:13.710 "small_bufsize": 8192, 00:23:13.710 "large_bufsize": 135168 00:23:13.710 } 00:23:13.710 } 00:23:13.710 ] 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "subsystem": "sock", 00:23:13.710 "config": [ 00:23:13.710 { 00:23:13.710 "method": "sock_set_default_impl", 00:23:13.710 "params": { 00:23:13.710 "impl_name": "posix" 00:23:13.710 } 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "method": "sock_impl_set_options", 00:23:13.710 "params": { 00:23:13.710 "impl_name": "ssl", 00:23:13.710 "recv_buf_size": 4096, 00:23:13.710 "send_buf_size": 4096, 00:23:13.710 "enable_recv_pipe": true, 00:23:13.710 "enable_quickack": false, 00:23:13.710 "enable_placement_id": 0, 00:23:13.710 "enable_zerocopy_send_server": true, 00:23:13.710 "enable_zerocopy_send_client": false, 00:23:13.710 "zerocopy_threshold": 0, 00:23:13.710 "tls_version": 0, 00:23:13.710 "enable_ktls": false 00:23:13.710 } 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "method": "sock_impl_set_options", 00:23:13.710 "params": { 00:23:13.710 "impl_name": "posix", 00:23:13.710 "recv_buf_size": 2097152, 00:23:13.710 "send_buf_size": 2097152, 00:23:13.710 "enable_recv_pipe": true, 
00:23:13.710 "enable_quickack": false, 00:23:13.710 "enable_placement_id": 0, 00:23:13.710 "enable_zerocopy_send_server": true, 00:23:13.710 "enable_zerocopy_send_client": false, 00:23:13.710 "zerocopy_threshold": 0, 00:23:13.710 "tls_version": 0, 00:23:13.710 "enable_ktls": false 00:23:13.710 } 00:23:13.710 } 00:23:13.710 ] 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "subsystem": "vmd", 00:23:13.710 "config": [] 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "subsystem": "accel", 00:23:13.710 "config": [ 00:23:13.710 { 00:23:13.710 "method": "accel_set_options", 00:23:13.710 "params": { 00:23:13.710 "small_cache_size": 128, 00:23:13.710 "large_cache_size": 16, 00:23:13.710 "task_count": 2048, 00:23:13.710 "sequence_count": 2048, 00:23:13.710 "buf_count": 2048 00:23:13.710 } 00:23:13.710 } 00:23:13.710 ] 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "subsystem": "bdev", 00:23:13.710 "config": [ 00:23:13.710 { 00:23:13.710 "method": "bdev_set_options", 00:23:13.710 "params": { 00:23:13.710 "bdev_io_pool_size": 65535, 00:23:13.710 "bdev_io_cache_size": 256, 00:23:13.710 "bdev_auto_examine": true, 00:23:13.710 "iobuf_small_cache_size": 128, 00:23:13.710 "iobuf_large_cache_size": 16 00:23:13.710 } 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "method": "bdev_raid_set_options", 00:23:13.710 "params": { 00:23:13.710 "process_window_size_kb": 1024 00:23:13.710 } 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "method": "bdev_iscsi_set_options", 00:23:13.710 "params": { 00:23:13.710 "timeout_sec": 30 00:23:13.710 } 00:23:13.710 }, 00:23:13.710 { 00:23:13.710 "method": "bdev_nvme_set_options", 00:23:13.710 "params": { 00:23:13.711 "action_on_timeout": "none", 00:23:13.711 "timeout_us": 0, 00:23:13.711 "timeout_admin_us": 0, 00:23:13.711 "keep_alive_timeout_ms": 10000, 00:23:13.711 "arbitration_burst": 0, 00:23:13.711 "low_priority_weight": 0, 00:23:13.711 "medium_priority_weight": 0, 00:23:13.711 "high_priority_weight": 0, 00:23:13.711 "nvme_adminq_poll_period_us": 10000, 00:23:13.711 "nvme_ioq_poll_period_us": 0, 00:23:13.711 "io_queue_requests": 0, 00:23:13.711 "delay_cmd_submit": true, 00:23:13.711 "transport_retry_count": 4, 00:23:13.711 "bdev_retry_count": 3, 00:23:13.711 "transport_ack_timeout": 0, 00:23:13.711 "ctrlr_loss_timeout_sec": 0, 00:23:13.711 "reconnect_delay_sec": 0, 00:23:13.711 "fast_io_fail_timeout_sec": 0, 00:23:13.711 "disable_auto_failback": false, 00:23:13.711 "generate_uuids": false, 00:23:13.711 "transport_tos": 0, 00:23:13.711 "nvme_error_stat": false, 00:23:13.711 "rdma_srq_size": 0, 00:23:13.711 "io_path_stat": false, 00:23:13.711 "allow_accel_sequence": false, 00:23:13.711 "rdma_max_cq_size": 0, 00:23:13.711 "rdma_cm_event_timeout_ms": 0, 00:23:13.711 "dhchap_digests": [ 00:23:13.711 "sha256", 00:23:13.711 "sha384", 00:23:13.711 "sha512" 00:23:13.711 ], 00:23:13.711 "dhchap_dhgroups": [ 00:23:13.711 "null", 00:23:13.711 "ffdhe2048", 00:23:13.711 "ffdhe3072", 00:23:13.711 "ffdhe4096", 00:23:13.711 "ffdhe6144", 00:23:13.711 "ffdhe8192" 00:23:13.711 ] 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "bdev_nvme_set_hotplug", 00:23:13.711 "params": { 00:23:13.711 "period_us": 100000, 00:23:13.711 "enable": false 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "bdev_malloc_create", 00:23:13.711 "params": { 00:23:13.711 "name": "malloc0", 00:23:13.711 "num_blocks": 8192, 00:23:13.711 "block_size": 4096, 00:23:13.711 "physical_block_size": 4096, 00:23:13.711 "uuid": "6f89c4bc-8710-4c4a-a2a2-9dcd6a1f9344", 00:23:13.711 "optimal_io_boundary": 0 
00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "bdev_wait_for_examine" 00:23:13.711 } 00:23:13.711 ] 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "subsystem": "nbd", 00:23:13.711 "config": [] 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "subsystem": "scheduler", 00:23:13.711 "config": [ 00:23:13.711 { 00:23:13.711 "method": "framework_set_scheduler", 00:23:13.711 "params": { 00:23:13.711 "name": "static" 00:23:13.711 } 00:23:13.711 } 00:23:13.711 ] 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "subsystem": "nvmf", 00:23:13.711 "config": [ 00:23:13.711 { 00:23:13.711 "method": "nvmf_set_config", 00:23:13.711 "params": { 00:23:13.711 "discovery_filter": "match_any", 00:23:13.711 "admin_cmd_passthru": { 00:23:13.711 "identify_ctrlr": false 00:23:13.711 } 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "nvmf_set_max_subsystems", 00:23:13.711 "params": { 00:23:13.711 "max_subsystems": 1024 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "nvmf_set_crdt", 00:23:13.711 "params": { 00:23:13.711 "crdt1": 0, 00:23:13.711 "crdt2": 0, 00:23:13.711 "crdt3": 0 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "nvmf_create_transport", 00:23:13.711 "params": { 00:23:13.711 "trtype": "TCP", 00:23:13.711 "max_queue_depth": 128, 00:23:13.711 "max_io_qpairs_per_ctrlr": 127, 00:23:13.711 "in_capsule_data_size": 4096, 00:23:13.711 "max_io_size": 131072, 00:23:13.711 "io_unit_size": 131072, 00:23:13.711 "max_aq_depth": 128, 00:23:13.711 "num_shared_buffers": 511, 00:23:13.711 "buf_cache_size": 4294967295, 00:23:13.711 "dif_insert_or_strip": false, 00:23:13.711 "zcopy": false, 00:23:13.711 "c2h_success": false, 00:23:13.711 "sock_priority": 0, 00:23:13.711 "abort_timeout_sec": 1, 00:23:13.711 "ack_timeout": 0, 00:23:13.711 "data_wr_pool_size": 0 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "nvmf_create_subsystem", 00:23:13.711 "params": { 00:23:13.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.711 "allow_any_host": false, 00:23:13.711 "serial_number": "SPDK00000000000001", 00:23:13.711 "model_number": "SPDK bdev Controller", 00:23:13.711 "max_namespaces": 10, 00:23:13.711 "min_cntlid": 1, 00:23:13.711 "max_cntlid": 65519, 00:23:13.711 "ana_reporting": false 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "nvmf_subsystem_add_host", 00:23:13.711 "params": { 00:23:13.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.711 "host": "nqn.2016-06.io.spdk:host1", 00:23:13.711 "psk": "/tmp/tmp.QrbfHRdWSl" 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "nvmf_subsystem_add_ns", 00:23:13.711 "params": { 00:23:13.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.711 "namespace": { 00:23:13.711 "nsid": 1, 00:23:13.711 "bdev_name": "malloc0", 00:23:13.711 "nguid": "6F89C4BC87104C4AA2A29DCD6A1F9344", 00:23:13.711 "uuid": "6f89c4bc-8710-4c4a-a2a2-9dcd6a1f9344", 00:23:13.711 "no_auto_visible": false 00:23:13.711 } 00:23:13.711 } 00:23:13.711 }, 00:23:13.711 { 00:23:13.711 "method": "nvmf_subsystem_add_listener", 00:23:13.711 "params": { 00:23:13.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.711 "listen_address": { 00:23:13.711 "trtype": "TCP", 00:23:13.711 "adrfam": "IPv4", 00:23:13.711 "traddr": "10.0.0.2", 00:23:13.711 "trsvcid": "4420" 00:23:13.711 }, 00:23:13.711 "secure_channel": true 00:23:13.711 } 00:23:13.711 } 00:23:13.711 ] 00:23:13.711 } 00:23:13.711 ] 00:23:13.711 }' 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:13.711 
18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3575891 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3575891 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3575891 ']' 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.711 18:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.711 [2024-07-25 18:56:25.537736] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:13.711 [2024-07-25 18:56:25.537806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.711 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.972 [2024-07-25 18:56:25.605407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.972 [2024-07-25 18:56:25.701788] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.972 [2024-07-25 18:56:25.701867] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.972 [2024-07-25 18:56:25.701881] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.972 [2024-07-25 18:56:25.701901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.972 [2024-07-25 18:56:25.701926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
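Note that this second target instance is not rebuilt RPC by RPC: it is launched with -c /dev/fd/62 and consumes the JSON that save_config dumped from the previous instance (the tgtconf blob echoed above), so the subsystem, the TLS listener, and the PSK binding all come back in one shot. A minimal sketch of that save-and-replay pattern, with bash process substitution standing in for the test's explicit file-descriptor redirection:

    # capture the live configuration of the running target
    tgtconf=$(scripts/rpc.py save_config)
    # start a fresh target that applies the captured JSON at startup
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")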
00:23:13.972 [2024-07-25 18:56:25.702023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.232 [2024-07-25 18:56:25.939178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.232 [2024-07-25 18:56:25.955142] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:14.232 [2024-07-25 18:56:25.971171] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.232 [2024-07-25 18:56:25.991295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3576040 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3576040 /var/tmp/bdevperf.sock 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3576040 ']' 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
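The initiator side uses the same replay trick: bdevperf is started idle (-z) with its own JSON config on /dev/fd/63, which already carries the bdev_nvme_attach_controller call with the PSK, and the I/O job is only triggered later through its RPC socket. A condensed sketch, again substituting process substitution for the /dev/fd/63 plumbing:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
    # once the socket is up, run the configured verify job
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests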
00:23:14.799 18:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:14.799 "subsystems": [ 00:23:14.799 { 00:23:14.799 "subsystem": "keyring", 00:23:14.799 "config": [] 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "subsystem": "iobuf", 00:23:14.799 "config": [ 00:23:14.799 { 00:23:14.799 "method": "iobuf_set_options", 00:23:14.799 "params": { 00:23:14.799 "small_pool_count": 8192, 00:23:14.799 "large_pool_count": 1024, 00:23:14.799 "small_bufsize": 8192, 00:23:14.799 "large_bufsize": 135168 00:23:14.799 } 00:23:14.799 } 00:23:14.799 ] 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "subsystem": "sock", 00:23:14.799 "config": [ 00:23:14.799 { 00:23:14.799 "method": "sock_set_default_impl", 00:23:14.799 "params": { 00:23:14.799 "impl_name": "posix" 00:23:14.799 } 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "method": "sock_impl_set_options", 00:23:14.799 "params": { 00:23:14.799 "impl_name": "ssl", 00:23:14.799 "recv_buf_size": 4096, 00:23:14.799 "send_buf_size": 4096, 00:23:14.799 "enable_recv_pipe": true, 00:23:14.799 "enable_quickack": false, 00:23:14.799 "enable_placement_id": 0, 00:23:14.799 "enable_zerocopy_send_server": true, 00:23:14.799 "enable_zerocopy_send_client": false, 00:23:14.799 "zerocopy_threshold": 0, 00:23:14.799 "tls_version": 0, 00:23:14.799 "enable_ktls": false 00:23:14.799 } 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "method": "sock_impl_set_options", 00:23:14.799 "params": { 00:23:14.799 "impl_name": "posix", 00:23:14.799 "recv_buf_size": 2097152, 00:23:14.799 "send_buf_size": 2097152, 00:23:14.799 "enable_recv_pipe": true, 00:23:14.799 "enable_quickack": false, 00:23:14.799 "enable_placement_id": 0, 00:23:14.799 "enable_zerocopy_send_server": true, 00:23:14.799 "enable_zerocopy_send_client": false, 00:23:14.799 "zerocopy_threshold": 0, 00:23:14.799 "tls_version": 0, 00:23:14.799 "enable_ktls": false 00:23:14.799 } 00:23:14.799 } 00:23:14.799 ] 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "subsystem": "vmd", 00:23:14.799 "config": [] 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "subsystem": "accel", 00:23:14.799 "config": [ 00:23:14.799 { 00:23:14.799 "method": "accel_set_options", 00:23:14.799 "params": { 00:23:14.799 "small_cache_size": 128, 00:23:14.799 "large_cache_size": 16, 00:23:14.799 "task_count": 2048, 00:23:14.799 "sequence_count": 2048, 00:23:14.799 "buf_count": 2048 00:23:14.799 } 00:23:14.799 } 00:23:14.799 ] 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "subsystem": "bdev", 00:23:14.799 "config": [ 00:23:14.799 { 00:23:14.799 "method": "bdev_set_options", 00:23:14.799 "params": { 00:23:14.799 "bdev_io_pool_size": 65535, 00:23:14.799 "bdev_io_cache_size": 256, 00:23:14.799 "bdev_auto_examine": true, 00:23:14.799 "iobuf_small_cache_size": 128, 00:23:14.799 "iobuf_large_cache_size": 16 00:23:14.799 } 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "method": "bdev_raid_set_options", 00:23:14.799 "params": { 00:23:14.799 "process_window_size_kb": 1024 00:23:14.799 } 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "method": "bdev_iscsi_set_options", 00:23:14.799 "params": { 00:23:14.799 "timeout_sec": 30 00:23:14.799 } 00:23:14.799 }, 00:23:14.799 { 00:23:14.799 "method": "bdev_nvme_set_options", 00:23:14.799 "params": { 00:23:14.799 "action_on_timeout": "none", 00:23:14.799 "timeout_us": 0, 00:23:14.799 "timeout_admin_us": 0, 00:23:14.799 "keep_alive_timeout_ms": 10000, 00:23:14.799 "arbitration_burst": 0, 00:23:14.799 "low_priority_weight": 0, 00:23:14.799 "medium_priority_weight": 0, 00:23:14.800 "high_priority_weight": 0, 00:23:14.800 
"nvme_adminq_poll_period_us": 10000, 00:23:14.800 "nvme_ioq_poll_period_us": 0, 00:23:14.800 "io_queue_requests": 512, 00:23:14.800 "delay_cmd_submit": true, 00:23:14.800 "transport_retry_count": 4, 00:23:14.800 "bdev_retry_count": 3, 00:23:14.800 "transport_ack_timeout": 0, 00:23:14.800 "ctrlr_loss_timeout_sec": 0, 00:23:14.800 "reconnect_delay_sec": 0, 00:23:14.800 "fast_io_fail_timeout_sec": 0, 00:23:14.800 "disable_auto_failback": false, 00:23:14.800 "generate_uuids": false, 00:23:14.800 "transport_tos": 0, 00:23:14.800 "nvme_error_stat": false, 00:23:14.800 "rdma_srq_size": 0, 00:23:14.800 "io_path_stat": false, 00:23:14.800 "allow_accel_sequence": false, 00:23:14.800 "rdma_max_cq_size": 0, 00:23:14.800 "rdma_cm_event_timeout_ms": 0, 00:23:14.800 "dhchap_digests": [ 00:23:14.800 "sha256", 00:23:14.800 "sha384", 00:23:14.800 "sha512" 00:23:14.800 ], 00:23:14.800 "dhchap_dhgroups": [ 00:23:14.800 "null", 00:23:14.800 "ffdhe2048", 00:23:14.800 "ffdhe3072", 00:23:14.800 "ffdhe4096", 00:23:14.800 "ffdWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.800 he6144", 00:23:14.800 "ffdhe8192" 00:23:14.800 ] 00:23:14.800 } 00:23:14.800 }, 00:23:14.800 { 00:23:14.800 "method": "bdev_nvme_attach_controller", 00:23:14.800 "params": { 00:23:14.800 "name": "TLSTEST", 00:23:14.800 "trtype": "TCP", 00:23:14.800 "adrfam": "IPv4", 00:23:14.800 "traddr": "10.0.0.2", 00:23:14.800 "trsvcid": "4420", 00:23:14.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.800 "prchk_reftag": false, 00:23:14.800 "prchk_guard": false, 00:23:14.800 "ctrlr_loss_timeout_sec": 0, 00:23:14.800 "reconnect_delay_sec": 0, 00:23:14.800 "fast_io_fail_timeout_sec": 0, 00:23:14.800 "psk": "/tmp/tmp.QrbfHRdWSl", 00:23:14.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.800 "hdgst": false, 00:23:14.800 "ddgst": false 00:23:14.800 } 00:23:14.800 }, 00:23:14.800 { 00:23:14.800 "method": "bdev_nvme_set_hotplug", 00:23:14.800 "params": { 00:23:14.800 "period_us": 100000, 00:23:14.800 "enable": false 00:23:14.800 } 00:23:14.800 }, 00:23:14.800 { 00:23:14.800 "method": "bdev_wait_for_examine" 00:23:14.800 } 00:23:14.800 ] 00:23:14.800 }, 00:23:14.800 { 00:23:14.800 "subsystem": "nbd", 00:23:14.800 "config": [] 00:23:14.800 } 00:23:14.800 ] 00:23:14.800 }' 00:23:14.800 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:14.800 18:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.800 [2024-07-25 18:56:26.639137] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:14.800 [2024-07-25 18:56:26.639216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576040 ] 00:23:14.800 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.059 [2024-07-25 18:56:26.700103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.059 [2024-07-25 18:56:26.785195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.320 [2024-07-25 18:56:26.953266] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.320 [2024-07-25 18:56:26.953433] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:15.887 18:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:15.887 18:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:15.887 18:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:15.887 Running I/O for 10 seconds... 00:23:28.113 00:23:28.114 Latency(us) 00:23:28.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.114 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.114 Verification LBA range: start 0x0 length 0x2000 00:23:28.114 TLSTESTn1 : 10.04 2622.01 10.24 0.00 0.00 48695.32 7233.23 48545.19 00:23:28.114 =================================================================================================================== 00:23:28.114 Total : 2622.01 10.24 0.00 0.00 48695.32 7233.23 48545.19 00:23:28.114 0 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3576040 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3576040 ']' 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3576040 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3576040 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3576040' 00:23:28.114 killing process with pid 3576040 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3576040 00:23:28.114 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.114 00:23:28.114 Latency(us) 00:23:28.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.114 =================================================================================================================== 00:23:28.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.114 [2024-07-25 18:56:37.852922] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:23:28.114 18:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3576040 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3575891 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3575891 ']' 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3575891 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3575891 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3575891' 00:23:28.114 killing process with pid 3575891 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3575891 00:23:28.114 [2024-07-25 18:56:38.099279] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3575891 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3577365 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3577365 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3577365 ']' 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.114 [2024-07-25 18:56:38.394643] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:28.114 [2024-07-25 18:56:38.394724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.114 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.114 [2024-07-25 18:56:38.461917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.114 [2024-07-25 18:56:38.549151] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:28.114 [2024-07-25 18:56:38.549204] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.114 [2024-07-25 18:56:38.549233] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.114 [2024-07-25 18:56:38.549244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.114 [2024-07-25 18:56:38.549254] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.114 [2024-07-25 18:56:38.549281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.QrbfHRdWSl 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QrbfHRdWSl 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.114 [2024-07-25 18:56:38.968864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.114 18:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:28.114 18:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:28.114 [2024-07-25 18:56:39.506326] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.114 [2024-07-25 18:56:39.506603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.114 18:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:28.114 malloc0 00:23:28.114 18:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:28.373 18:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QrbfHRdWSl 00:23:28.631 [2024-07-25 18:56:40.356095] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3577661 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3577661 /var/tmp/bdevperf.sock 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3577661 ']' 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.631 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.631 [2024-07-25 18:56:40.420983] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:28.631 [2024-07-25 18:56:40.421074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577661 ] 00:23:28.631 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.631 [2024-07-25 18:56:40.481768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.889 [2024-07-25 18:56:40.570414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.889 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.889 18:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.889 18:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QrbfHRdWSl 00:23:29.148 18:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:29.406 [2024-07-25 18:56:41.186259] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.406 nvme0n1 00:23:29.406 18:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.663 Running I/O for 1 seconds... 
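Unlike the earlier TLSTEST attach, which handed bdev_nvme_attach_controller a raw PSK file path (the spdk_nvme_ctrlr_opts.psk route flagged as deprecated above), this run first registers the key with the keyring and then references it by name. Reduced to the two RPCs visible in the records above:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QrbfHRdWSl
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Because --psk now names a registered key (key0) rather than a file, the attach no longer trips the nvme_ctrlr_psk deprecation warning seen in the earlier runs.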
00:23:30.604 00:23:30.604 Latency(us) 00:23:30.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.604 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.604 Verification LBA range: start 0x0 length 0x2000 00:23:30.604 nvme0n1 : 1.02 3201.47 12.51 0.00 0.00 39582.03 6990.51 32039.82 00:23:30.604 =================================================================================================================== 00:23:30.604 Total : 3201.47 12.51 0.00 0.00 39582.03 6990.51 32039.82 00:23:30.604 0 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3577661 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3577661 ']' 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3577661 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577661 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577661' 00:23:30.604 killing process with pid 3577661 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3577661 00:23:30.604 Received shutdown signal, test time was about 1.000000 seconds 00:23:30.604 00:23:30.604 Latency(us) 00:23:30.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.604 =================================================================================================================== 00:23:30.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.604 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3577661 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3577365 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3577365 ']' 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3577365 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577365 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577365' 00:23:30.862 killing process with pid 3577365 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3577365 00:23:30.862 [2024-07-25 18:56:42.695860] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:30.862 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3577365 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.121 
18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3577951 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3577951 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3577951 ']' 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:31.121 18:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.121 [2024-07-25 18:56:42.979763] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:31.121 [2024-07-25 18:56:42.979852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.380 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.380 [2024-07-25 18:56:43.043230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.380 [2024-07-25 18:56:43.125485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.380 [2024-07-25 18:56:43.125538] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.380 [2024-07-25 18:56:43.125558] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.380 [2024-07-25 18:56:43.125569] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.380 [2024-07-25 18:56:43.125579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
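As a quick cross-check of the result tables above (all jobs run at queue depth 128 with 4096-byte I/O): bandwidth is IOPS x 4096 / 2^20 MiB/s, and by Little's law the mean latency should be close to queue_depth / IOPS. For the 10-second TLSTESTn1 run, 2622.01 x 4096 / 2^20 = 10.24 MiB/s and 128 / 2622.01 = 0.0488 s, roughly 48,800 us against the reported 48,695.32 us average; for the 1-second nvme0n1 run, 3201.47 x 4096 / 2^20 = 12.51 MiB/s and 128 / 3201.47 = 0.0400 s, roughly 40,000 us against the reported 39,582.03 us. The small differences are expected, since the reported IOPS are averaged over the whole run.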
00:23:31.380 [2024-07-25 18:56:43.125604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.380 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.380 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:31.380 18:56:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.380 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.380 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.639 [2024-07-25 18:56:43.266706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.639 malloc0 00:23:31.639 [2024-07-25 18:56:43.299085] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.639 [2024-07-25 18:56:43.299372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3577971 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3577971 /var/tmp/bdevperf.sock 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3577971 ']' 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:31.639 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.639 [2024-07-25 18:56:43.370568] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:31.639 [2024-07-25 18:56:43.370651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577971 ] 00:23:31.639 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.639 [2024-07-25 18:56:43.428308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.639 [2024-07-25 18:56:43.515201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.898 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.898 18:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:31.898 18:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QrbfHRdWSl 00:23:32.156 18:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:32.414 [2024-07-25 18:56:44.138139] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.414 nvme0n1 00:23:32.414 18:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:32.674 Running I/O for 1 seconds... 00:23:33.663 00:23:33.663 Latency(us) 00:23:33.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.663 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:33.663 Verification LBA range: start 0x0 length 0x2000 00:23:33.663 nvme0n1 : 1.02 3517.71 13.74 0.00 0.00 36004.74 7573.05 43108.12 00:23:33.663 =================================================================================================================== 00:23:33.663 Total : 3517.71 13.74 0.00 0.00 36004.74 7573.05 43108.12 00:23:33.663 0 00:23:33.663 18:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:33.663 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.663 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.663 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.663 18:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:33.663 "subsystems": [ 00:23:33.663 { 00:23:33.663 "subsystem": "keyring", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "keyring_file_add_key", 00:23:33.663 "params": { 00:23:33.663 "name": "key0", 00:23:33.663 "path": "/tmp/tmp.QrbfHRdWSl" 00:23:33.663 } 00:23:33.663 } 00:23:33.663 ] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "iobuf", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "iobuf_set_options", 00:23:33.663 "params": { 00:23:33.663 "small_pool_count": 8192, 00:23:33.663 "large_pool_count": 1024, 00:23:33.663 "small_bufsize": 8192, 00:23:33.663 "large_bufsize": 135168 00:23:33.663 } 00:23:33.663 } 00:23:33.663 ] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "sock", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "sock_set_default_impl", 00:23:33.663 "params": { 00:23:33.663 "impl_name": "posix" 00:23:33.663 } 00:23:33.663 }, 00:23:33.663 
{ 00:23:33.663 "method": "sock_impl_set_options", 00:23:33.663 "params": { 00:23:33.663 "impl_name": "ssl", 00:23:33.663 "recv_buf_size": 4096, 00:23:33.663 "send_buf_size": 4096, 00:23:33.663 "enable_recv_pipe": true, 00:23:33.663 "enable_quickack": false, 00:23:33.663 "enable_placement_id": 0, 00:23:33.663 "enable_zerocopy_send_server": true, 00:23:33.663 "enable_zerocopy_send_client": false, 00:23:33.663 "zerocopy_threshold": 0, 00:23:33.663 "tls_version": 0, 00:23:33.663 "enable_ktls": false 00:23:33.663 } 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "method": "sock_impl_set_options", 00:23:33.663 "params": { 00:23:33.663 "impl_name": "posix", 00:23:33.663 "recv_buf_size": 2097152, 00:23:33.663 "send_buf_size": 2097152, 00:23:33.663 "enable_recv_pipe": true, 00:23:33.663 "enable_quickack": false, 00:23:33.663 "enable_placement_id": 0, 00:23:33.663 "enable_zerocopy_send_server": true, 00:23:33.663 "enable_zerocopy_send_client": false, 00:23:33.663 "zerocopy_threshold": 0, 00:23:33.663 "tls_version": 0, 00:23:33.663 "enable_ktls": false 00:23:33.663 } 00:23:33.663 } 00:23:33.663 ] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "vmd", 00:23:33.663 "config": [] 00:23:33.663 }, 00:23:33.663 { 00:23:33.663 "subsystem": "accel", 00:23:33.663 "config": [ 00:23:33.663 { 00:23:33.663 "method": "accel_set_options", 00:23:33.664 "params": { 00:23:33.664 "small_cache_size": 128, 00:23:33.664 "large_cache_size": 16, 00:23:33.664 "task_count": 2048, 00:23:33.664 "sequence_count": 2048, 00:23:33.664 "buf_count": 2048 00:23:33.664 } 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "subsystem": "bdev", 00:23:33.664 "config": [ 00:23:33.664 { 00:23:33.664 "method": "bdev_set_options", 00:23:33.664 "params": { 00:23:33.664 "bdev_io_pool_size": 65535, 00:23:33.664 "bdev_io_cache_size": 256, 00:23:33.664 "bdev_auto_examine": true, 00:23:33.664 "iobuf_small_cache_size": 128, 00:23:33.664 "iobuf_large_cache_size": 16 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_raid_set_options", 00:23:33.664 "params": { 00:23:33.664 "process_window_size_kb": 1024 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_iscsi_set_options", 00:23:33.664 "params": { 00:23:33.664 "timeout_sec": 30 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_nvme_set_options", 00:23:33.664 "params": { 00:23:33.664 "action_on_timeout": "none", 00:23:33.664 "timeout_us": 0, 00:23:33.664 "timeout_admin_us": 0, 00:23:33.664 "keep_alive_timeout_ms": 10000, 00:23:33.664 "arbitration_burst": 0, 00:23:33.664 "low_priority_weight": 0, 00:23:33.664 "medium_priority_weight": 0, 00:23:33.664 "high_priority_weight": 0, 00:23:33.664 "nvme_adminq_poll_period_us": 10000, 00:23:33.664 "nvme_ioq_poll_period_us": 0, 00:23:33.664 "io_queue_requests": 0, 00:23:33.664 "delay_cmd_submit": true, 00:23:33.664 "transport_retry_count": 4, 00:23:33.664 "bdev_retry_count": 3, 00:23:33.664 "transport_ack_timeout": 0, 00:23:33.664 "ctrlr_loss_timeout_sec": 0, 00:23:33.664 "reconnect_delay_sec": 0, 00:23:33.664 "fast_io_fail_timeout_sec": 0, 00:23:33.664 "disable_auto_failback": false, 00:23:33.664 "generate_uuids": false, 00:23:33.664 "transport_tos": 0, 00:23:33.664 "nvme_error_stat": false, 00:23:33.664 "rdma_srq_size": 0, 00:23:33.664 "io_path_stat": false, 00:23:33.664 "allow_accel_sequence": false, 00:23:33.664 "rdma_max_cq_size": 0, 00:23:33.664 "rdma_cm_event_timeout_ms": 0, 00:23:33.664 "dhchap_digests": [ 00:23:33.664 "sha256", 00:23:33.664 "sha384", 
00:23:33.664 "sha512" 00:23:33.664 ], 00:23:33.664 "dhchap_dhgroups": [ 00:23:33.664 "null", 00:23:33.664 "ffdhe2048", 00:23:33.664 "ffdhe3072", 00:23:33.664 "ffdhe4096", 00:23:33.664 "ffdhe6144", 00:23:33.664 "ffdhe8192" 00:23:33.664 ] 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_nvme_set_hotplug", 00:23:33.664 "params": { 00:23:33.664 "period_us": 100000, 00:23:33.664 "enable": false 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_malloc_create", 00:23:33.664 "params": { 00:23:33.664 "name": "malloc0", 00:23:33.664 "num_blocks": 8192, 00:23:33.664 "block_size": 4096, 00:23:33.664 "physical_block_size": 4096, 00:23:33.664 "uuid": "b2935f58-0911-4199-ba74-312f939ba7b3", 00:23:33.664 "optimal_io_boundary": 0 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "bdev_wait_for_examine" 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "subsystem": "nbd", 00:23:33.664 "config": [] 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "subsystem": "scheduler", 00:23:33.664 "config": [ 00:23:33.664 { 00:23:33.664 "method": "framework_set_scheduler", 00:23:33.664 "params": { 00:23:33.664 "name": "static" 00:23:33.664 } 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "subsystem": "nvmf", 00:23:33.664 "config": [ 00:23:33.664 { 00:23:33.664 "method": "nvmf_set_config", 00:23:33.664 "params": { 00:23:33.664 "discovery_filter": "match_any", 00:23:33.664 "admin_cmd_passthru": { 00:23:33.664 "identify_ctrlr": false 00:23:33.664 } 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "nvmf_set_max_subsystems", 00:23:33.664 "params": { 00:23:33.664 "max_subsystems": 1024 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "nvmf_set_crdt", 00:23:33.664 "params": { 00:23:33.664 "crdt1": 0, 00:23:33.664 "crdt2": 0, 00:23:33.664 "crdt3": 0 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "nvmf_create_transport", 00:23:33.664 "params": { 00:23:33.664 "trtype": "TCP", 00:23:33.664 "max_queue_depth": 128, 00:23:33.664 "max_io_qpairs_per_ctrlr": 127, 00:23:33.664 "in_capsule_data_size": 4096, 00:23:33.664 "max_io_size": 131072, 00:23:33.664 "io_unit_size": 131072, 00:23:33.664 "max_aq_depth": 128, 00:23:33.664 "num_shared_buffers": 511, 00:23:33.664 "buf_cache_size": 4294967295, 00:23:33.664 "dif_insert_or_strip": false, 00:23:33.664 "zcopy": false, 00:23:33.664 "c2h_success": false, 00:23:33.664 "sock_priority": 0, 00:23:33.664 "abort_timeout_sec": 1, 00:23:33.664 "ack_timeout": 0, 00:23:33.664 "data_wr_pool_size": 0 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "nvmf_create_subsystem", 00:23:33.664 "params": { 00:23:33.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.664 "allow_any_host": false, 00:23:33.664 "serial_number": "00000000000000000000", 00:23:33.664 "model_number": "SPDK bdev Controller", 00:23:33.664 "max_namespaces": 32, 00:23:33.664 "min_cntlid": 1, 00:23:33.664 "max_cntlid": 65519, 00:23:33.664 "ana_reporting": false 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "nvmf_subsystem_add_host", 00:23:33.664 "params": { 00:23:33.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.664 "host": "nqn.2016-06.io.spdk:host1", 00:23:33.664 "psk": "key0" 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "nvmf_subsystem_add_ns", 00:23:33.664 "params": { 00:23:33.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.664 "namespace": { 00:23:33.664 "nsid": 1, 00:23:33.664 "bdev_name": 
"malloc0", 00:23:33.664 "nguid": "B2935F5809114199BA74312F939BA7B3", 00:23:33.664 "uuid": "b2935f58-0911-4199-ba74-312f939ba7b3", 00:23:33.664 "no_auto_visible": false 00:23:33.664 } 00:23:33.664 } 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "method": "nvmf_subsystem_add_listener", 00:23:33.664 "params": { 00:23:33.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.664 "listen_address": { 00:23:33.664 "trtype": "TCP", 00:23:33.664 "adrfam": "IPv4", 00:23:33.664 "traddr": "10.0.0.2", 00:23:33.664 "trsvcid": "4420" 00:23:33.664 }, 00:23:33.664 "secure_channel": true 00:23:33.664 } 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 }' 00:23:33.664 18:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:34.234 18:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:34.234 "subsystems": [ 00:23:34.234 { 00:23:34.234 "subsystem": "keyring", 00:23:34.234 "config": [ 00:23:34.234 { 00:23:34.234 "method": "keyring_file_add_key", 00:23:34.234 "params": { 00:23:34.234 "name": "key0", 00:23:34.234 "path": "/tmp/tmp.QrbfHRdWSl" 00:23:34.234 } 00:23:34.234 } 00:23:34.234 ] 00:23:34.234 }, 00:23:34.234 { 00:23:34.234 "subsystem": "iobuf", 00:23:34.234 "config": [ 00:23:34.234 { 00:23:34.234 "method": "iobuf_set_options", 00:23:34.234 "params": { 00:23:34.234 "small_pool_count": 8192, 00:23:34.234 "large_pool_count": 1024, 00:23:34.234 "small_bufsize": 8192, 00:23:34.234 "large_bufsize": 135168 00:23:34.234 } 00:23:34.234 } 00:23:34.234 ] 00:23:34.234 }, 00:23:34.234 { 00:23:34.234 "subsystem": "sock", 00:23:34.234 "config": [ 00:23:34.234 { 00:23:34.234 "method": "sock_set_default_impl", 00:23:34.234 "params": { 00:23:34.234 "impl_name": "posix" 00:23:34.234 } 00:23:34.234 }, 00:23:34.234 { 00:23:34.234 "method": "sock_impl_set_options", 00:23:34.234 "params": { 00:23:34.234 "impl_name": "ssl", 00:23:34.234 "recv_buf_size": 4096, 00:23:34.234 "send_buf_size": 4096, 00:23:34.234 "enable_recv_pipe": true, 00:23:34.234 "enable_quickack": false, 00:23:34.234 "enable_placement_id": 0, 00:23:34.234 "enable_zerocopy_send_server": true, 00:23:34.234 "enable_zerocopy_send_client": false, 00:23:34.234 "zerocopy_threshold": 0, 00:23:34.234 "tls_version": 0, 00:23:34.234 "enable_ktls": false 00:23:34.234 } 00:23:34.234 }, 00:23:34.234 { 00:23:34.234 "method": "sock_impl_set_options", 00:23:34.234 "params": { 00:23:34.234 "impl_name": "posix", 00:23:34.234 "recv_buf_size": 2097152, 00:23:34.234 "send_buf_size": 2097152, 00:23:34.234 "enable_recv_pipe": true, 00:23:34.234 "enable_quickack": false, 00:23:34.234 "enable_placement_id": 0, 00:23:34.234 "enable_zerocopy_send_server": true, 00:23:34.234 "enable_zerocopy_send_client": false, 00:23:34.234 "zerocopy_threshold": 0, 00:23:34.234 "tls_version": 0, 00:23:34.234 "enable_ktls": false 00:23:34.234 } 00:23:34.234 } 00:23:34.234 ] 00:23:34.234 }, 00:23:34.234 { 00:23:34.234 "subsystem": "vmd", 00:23:34.234 "config": [] 00:23:34.234 }, 00:23:34.234 { 00:23:34.234 "subsystem": "accel", 00:23:34.234 "config": [ 00:23:34.234 { 00:23:34.234 "method": "accel_set_options", 00:23:34.234 "params": { 00:23:34.234 "small_cache_size": 128, 00:23:34.234 "large_cache_size": 16, 00:23:34.234 "task_count": 2048, 00:23:34.234 "sequence_count": 2048, 00:23:34.234 "buf_count": 2048 00:23:34.234 } 00:23:34.234 } 00:23:34.234 ] 00:23:34.234 }, 00:23:34.234 { 00:23:34.234 "subsystem": "bdev", 00:23:34.234 "config": [ 00:23:34.234 { 00:23:34.234 
"method": "bdev_set_options", 00:23:34.234 "params": { 00:23:34.234 "bdev_io_pool_size": 65535, 00:23:34.234 "bdev_io_cache_size": 256, 00:23:34.234 "bdev_auto_examine": true, 00:23:34.234 "iobuf_small_cache_size": 128, 00:23:34.234 "iobuf_large_cache_size": 16 00:23:34.234 } 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "method": "bdev_raid_set_options", 00:23:34.235 "params": { 00:23:34.235 "process_window_size_kb": 1024 00:23:34.235 } 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "method": "bdev_iscsi_set_options", 00:23:34.235 "params": { 00:23:34.235 "timeout_sec": 30 00:23:34.235 } 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "method": "bdev_nvme_set_options", 00:23:34.235 "params": { 00:23:34.235 "action_on_timeout": "none", 00:23:34.235 "timeout_us": 0, 00:23:34.235 "timeout_admin_us": 0, 00:23:34.235 "keep_alive_timeout_ms": 10000, 00:23:34.235 "arbitration_burst": 0, 00:23:34.235 "low_priority_weight": 0, 00:23:34.235 "medium_priority_weight": 0, 00:23:34.235 "high_priority_weight": 0, 00:23:34.235 "nvme_adminq_poll_period_us": 10000, 00:23:34.235 "nvme_ioq_poll_period_us": 0, 00:23:34.235 "io_queue_requests": 512, 00:23:34.235 "delay_cmd_submit": true, 00:23:34.235 "transport_retry_count": 4, 00:23:34.235 "bdev_retry_count": 3, 00:23:34.235 "transport_ack_timeout": 0, 00:23:34.235 "ctrlr_loss_timeout_sec": 0, 00:23:34.235 "reconnect_delay_sec": 0, 00:23:34.235 "fast_io_fail_timeout_sec": 0, 00:23:34.235 "disable_auto_failback": false, 00:23:34.235 "generate_uuids": false, 00:23:34.235 "transport_tos": 0, 00:23:34.235 "nvme_error_stat": false, 00:23:34.235 "rdma_srq_size": 0, 00:23:34.235 "io_path_stat": false, 00:23:34.235 "allow_accel_sequence": false, 00:23:34.235 "rdma_max_cq_size": 0, 00:23:34.235 "rdma_cm_event_timeout_ms": 0, 00:23:34.235 "dhchap_digests": [ 00:23:34.235 "sha256", 00:23:34.235 "sha384", 00:23:34.235 "sha512" 00:23:34.235 ], 00:23:34.235 "dhchap_dhgroups": [ 00:23:34.235 "null", 00:23:34.235 "ffdhe2048", 00:23:34.235 "ffdhe3072", 00:23:34.235 "ffdhe4096", 00:23:34.235 "ffdhe6144", 00:23:34.235 "ffdhe8192" 00:23:34.235 ] 00:23:34.235 } 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "method": "bdev_nvme_attach_controller", 00:23:34.235 "params": { 00:23:34.235 "name": "nvme0", 00:23:34.235 "trtype": "TCP", 00:23:34.235 "adrfam": "IPv4", 00:23:34.235 "traddr": "10.0.0.2", 00:23:34.235 "trsvcid": "4420", 00:23:34.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.235 "prchk_reftag": false, 00:23:34.235 "prchk_guard": false, 00:23:34.235 "ctrlr_loss_timeout_sec": 0, 00:23:34.235 "reconnect_delay_sec": 0, 00:23:34.235 "fast_io_fail_timeout_sec": 0, 00:23:34.235 "psk": "key0", 00:23:34.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.235 "hdgst": false, 00:23:34.235 "ddgst": false 00:23:34.235 } 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "method": "bdev_nvme_set_hotplug", 00:23:34.235 "params": { 00:23:34.235 "period_us": 100000, 00:23:34.235 "enable": false 00:23:34.235 } 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "method": "bdev_enable_histogram", 00:23:34.235 "params": { 00:23:34.235 "name": "nvme0n1", 00:23:34.235 "enable": true 00:23:34.235 } 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "method": "bdev_wait_for_examine" 00:23:34.235 } 00:23:34.235 ] 00:23:34.235 }, 00:23:34.235 { 00:23:34.235 "subsystem": "nbd", 00:23:34.235 "config": [] 00:23:34.235 } 00:23:34.235 ] 00:23:34.235 }' 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3577971 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3577971 
']' 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3577971 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577971 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577971' 00:23:34.235 killing process with pid 3577971 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3577971 00:23:34.235 Received shutdown signal, test time was about 1.000000 seconds 00:23:34.235 00:23:34.235 Latency(us) 00:23:34.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.235 =================================================================================================================== 00:23:34.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.235 18:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3577971 00:23:34.235 18:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3577951 00:23:34.235 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3577951 ']' 00:23:34.235 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3577951 00:23:34.235 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.235 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.235 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577951 00:23:34.494 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:34.494 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:34.494 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577951' 00:23:34.494 killing process with pid 3577951 00:23:34.494 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3577951 00:23:34.494 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3577951 00:23:34.753 18:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:34.753 18:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:34.753 "subsystems": [ 00:23:34.753 { 00:23:34.753 "subsystem": "keyring", 00:23:34.753 "config": [ 00:23:34.753 { 00:23:34.753 "method": "keyring_file_add_key", 00:23:34.753 "params": { 00:23:34.753 "name": "key0", 00:23:34.753 "path": "/tmp/tmp.QrbfHRdWSl" 00:23:34.753 } 00:23:34.753 } 00:23:34.753 ] 00:23:34.753 }, 00:23:34.753 { 00:23:34.753 "subsystem": "iobuf", 00:23:34.753 "config": [ 00:23:34.753 { 00:23:34.753 "method": "iobuf_set_options", 00:23:34.753 "params": { 00:23:34.753 "small_pool_count": 8192, 00:23:34.753 "large_pool_count": 1024, 00:23:34.753 "small_bufsize": 8192, 00:23:34.753 "large_bufsize": 135168 00:23:34.753 } 00:23:34.753 } 00:23:34.753 ] 00:23:34.753 }, 00:23:34.753 { 00:23:34.753 "subsystem": "sock", 00:23:34.753 "config": [ 00:23:34.753 { 00:23:34.753 "method": "sock_set_default_impl", 00:23:34.753 "params": { 00:23:34.753 "impl_name": "posix" 00:23:34.753 } 00:23:34.753 }, 00:23:34.753 { 
00:23:34.753 "method": "sock_impl_set_options", 00:23:34.753 "params": { 00:23:34.753 "impl_name": "ssl", 00:23:34.753 "recv_buf_size": 4096, 00:23:34.753 "send_buf_size": 4096, 00:23:34.753 "enable_recv_pipe": true, 00:23:34.753 "enable_quickack": false, 00:23:34.753 "enable_placement_id": 0, 00:23:34.753 "enable_zerocopy_send_server": true, 00:23:34.753 "enable_zerocopy_send_client": false, 00:23:34.753 "zerocopy_threshold": 0, 00:23:34.753 "tls_version": 0, 00:23:34.753 "enable_ktls": false 00:23:34.753 } 00:23:34.753 }, 00:23:34.753 { 00:23:34.753 "method": "sock_impl_set_options", 00:23:34.753 "params": { 00:23:34.753 "impl_name": "posix", 00:23:34.753 "recv_buf_size": 2097152, 00:23:34.753 "send_buf_size": 2097152, 00:23:34.753 "enable_recv_pipe": true, 00:23:34.753 "enable_quickack": false, 00:23:34.753 "enable_placement_id": 0, 00:23:34.753 "enable_zerocopy_send_server": true, 00:23:34.753 "enable_zerocopy_send_client": false, 00:23:34.753 "zerocopy_threshold": 0, 00:23:34.753 "tls_version": 0, 00:23:34.753 "enable_ktls": false 00:23:34.753 } 00:23:34.753 } 00:23:34.753 ] 00:23:34.753 }, 00:23:34.753 { 00:23:34.753 "subsystem": "vmd", 00:23:34.753 "config": [] 00:23:34.753 }, 00:23:34.753 { 00:23:34.753 "subsystem": "accel", 00:23:34.753 "config": [ 00:23:34.753 { 00:23:34.753 "method": "accel_set_options", 00:23:34.753 "params": { 00:23:34.753 "small_cache_size": 128, 00:23:34.753 "large_cache_size": 16, 00:23:34.753 "task_count": 2048, 00:23:34.753 "sequence_count": 2048, 00:23:34.753 "buf_count": 2048 00:23:34.753 } 00:23:34.753 } 00:23:34.753 ] 00:23:34.753 }, 00:23:34.753 { 00:23:34.753 "subsystem": "bdev", 00:23:34.753 "config": [ 00:23:34.753 { 00:23:34.753 "method": "bdev_set_options", 00:23:34.753 "params": { 00:23:34.753 "bdev_io_pool_size": 65535, 00:23:34.753 "bdev_io_cache_size": 256, 00:23:34.753 "bdev_auto_examine": true, 00:23:34.753 "iobuf_small_cache_size": 128, 00:23:34.753 "iobuf_large_cache_size": 16 00:23:34.753 } 00:23:34.753 }, 00:23:34.753 { 00:23:34.753 "method": "bdev_raid_set_options", 00:23:34.753 "params": { 00:23:34.753 "process_window_size_kb": 1024 00:23:34.753 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "bdev_iscsi_set_options", 00:23:34.754 "params": { 00:23:34.754 "timeout_sec": 30 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "bdev_nvme_set_options", 00:23:34.754 "params": { 00:23:34.754 "action_on_timeout": "none", 00:23:34.754 "timeout_us": 0, 00:23:34.754 "timeout_admin_us": 0, 00:23:34.754 "keep_alive_timeout_ms": 10000, 00:23:34.754 "arbitration_burst": 0, 00:23:34.754 "low_priority_weight": 0, 00:23:34.754 "medium_priority_weight": 0, 00:23:34.754 "high_priority_weight": 0, 00:23:34.754 "nvme_adminq_poll_period_us": 10000, 00:23:34.754 "nvme_ioq_poll_period_us": 0, 00:23:34.754 "io_queue_requests": 0, 00:23:34.754 "delay_cmd_submit": true, 00:23:34.754 "transport_retry_count": 4, 00:23:34.754 "bdev_retry_count": 3, 00:23:34.754 "transport_ack_timeout": 0, 00:23:34.754 "ctrlr_loss_timeout_sec": 0, 00:23:34.754 "reconnect_delay_sec": 0, 00:23:34.754 "fast_io_fail_timeout_sec": 0, 00:23:34.754 "disable_auto_failback": false, 00:23:34.754 "generate_uuids": false, 00:23:34.754 "transport_tos": 0, 00:23:34.754 "nvme_error_stat": false, 00:23:34.754 "rdma_srq_size": 0, 00:23:34.754 "io_path_stat": false, 00:23:34.754 "allow_accel_sequence": false, 00:23:34.754 "rdma_max_cq_size": 0, 00:23:34.754 "rdma_cm_event_timeout_ms": 0, 00:23:34.754 "dhchap_digests": [ 00:23:34.754 "sha256", 00:23:34.754 "sha384", 
00:23:34.754 "sha512" 00:23:34.754 ], 00:23:34.754 "dhchap_dhgroups": [ 00:23:34.754 "null", 00:23:34.754 "ffdhe2048", 00:23:34.754 "ffdhe3072", 00:23:34.754 "ffdhe4096", 00:23:34.754 "ffdhe6144", 00:23:34.754 "ffdhe8192" 00:23:34.754 ] 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "bdev_nvme_set_hotplug", 00:23:34.754 "params": { 00:23:34.754 "period_us": 100000, 00:23:34.754 "enable": false 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "bdev_malloc_create", 00:23:34.754 "params": { 00:23:34.754 "name": "malloc0", 00:23:34.754 "num_blocks": 8192, 00:23:34.754 "block_size": 4096, 00:23:34.754 "physical_block_size": 4096, 00:23:34.754 "uuid": "b2935f58-0911-4199-ba74-312f939ba7b3", 00:23:34.754 "optimal_io_boundary": 0 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "bdev_wait_for_examine" 00:23:34.754 } 00:23:34.754 ] 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "subsystem": "nbd", 00:23:34.754 "config": [] 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "subsystem": "scheduler", 00:23:34.754 "config": [ 00:23:34.754 { 00:23:34.754 "method": "framework_set_scheduler", 00:23:34.754 "params": { 00:23:34.754 "name": "static" 00:23:34.754 } 00:23:34.754 } 00:23:34.754 ] 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "subsystem": "nvmf", 00:23:34.754 "config": [ 00:23:34.754 { 00:23:34.754 "method": "nvmf_set_config", 00:23:34.754 "params": { 00:23:34.754 "discovery_filter": "match_any", 00:23:34.754 "admin_cmd_passthru": { 00:23:34.754 "identify_ctrlr": false 00:23:34.754 } 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "nvmf_set_max_subsystems", 00:23:34.754 "params": { 00:23:34.754 "max_subsystems": 1024 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "nvmf_set_crdt", 00:23:34.754 "params": { 00:23:34.754 "crdt1": 0, 00:23:34.754 "crdt2": 0, 00:23:34.754 "crdt3": 0 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "nvmf_create_transport", 00:23:34.754 "params": { 00:23:34.754 "trtype": "TCP", 00:23:34.754 "max_queue_depth": 128, 00:23:34.754 "max_io_qpairs_per_ctrlr": 127, 00:23:34.754 "in_capsule_data_size": 4096, 00:23:34.754 "max_io_size": 131072, 00:23:34.754 "io_unit_size": 131072, 00:23:34.754 "max_aq_depth": 128, 00:23:34.754 "num_shared_buffers": 511, 00:23:34.754 "buf_cache_size": 4294967295, 00:23:34.754 "dif_insert_or_strip": false, 00:23:34.754 "zcopy": false, 00:23:34.754 "c2h_success": false, 00:23:34.754 "sock_priority": 0, 00:23:34.754 "abort_timeout_sec": 1, 00:23:34.754 "ack_timeout": 0, 00:23:34.754 "data_wr_pool_size": 0 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "nvmf_create_subsystem", 00:23:34.754 "params": { 00:23:34.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.754 "allow_any_host": false, 00:23:34.754 "serial_number": "00000000000000000000", 00:23:34.754 "model_number": "SPDK bdev Controller", 00:23:34.754 "max_namespaces": 32, 00:23:34.754 "min_cntlid": 1, 00:23:34.754 "max_cntlid": 65519, 00:23:34.754 "ana_reporting": false 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "nvmf_subsystem_add_host", 00:23:34.754 "params": { 00:23:34.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.754 "host": "nqn.2016-06.io.spdk:host1", 00:23:34.754 "psk": "key0" 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "nvmf_subsystem_add_ns", 00:23:34.754 "params": { 00:23:34.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.754 "namespace": { 00:23:34.754 "nsid": 1, 00:23:34.754 "bdev_name": 
"malloc0", 00:23:34.754 "nguid": "B2935F5809114199BA74312F939BA7B3", 00:23:34.754 "uuid": "b2935f58-0911-4199-ba74-312f939ba7b3", 00:23:34.754 "no_auto_visible": false 00:23:34.754 } 00:23:34.754 } 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "method": "nvmf_subsystem_add_listener", 00:23:34.754 "params": { 00:23:34.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.754 "listen_address": { 00:23:34.754 "trtype": "TCP", 00:23:34.754 "adrfam": "IPv4", 00:23:34.754 "traddr": "10.0.0.2", 00:23:34.754 "trsvcid": "4420" 00:23:34.754 }, 00:23:34.754 "secure_channel": true 00:23:34.754 } 00:23:34.754 } 00:23:34.754 ] 00:23:34.754 } 00:23:34.754 ] 00:23:34.754 }' 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3578379 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3578379 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3578379 ']' 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.754 18:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.754 [2024-07-25 18:56:46.440691] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:34.754 [2024-07-25 18:56:46.440806] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.754 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.754 [2024-07-25 18:56:46.527334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.754 [2024-07-25 18:56:46.615546] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.754 [2024-07-25 18:56:46.615596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.754 [2024-07-25 18:56:46.615622] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.754 [2024-07-25 18:56:46.615636] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.754 [2024-07-25 18:56:46.615648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:34.754 [2024-07-25 18:56:46.615723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.012 [2024-07-25 18:56:46.849975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.012 [2024-07-25 18:56:46.881990] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.012 [2024-07-25 18:56:46.890281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3578529 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3578529 /var/tmp/bdevperf.sock 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3578529 ']' 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
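The bdevperf configuration echoed next wires up TLS the same way the first run did over live RPCs: the PSK file is registered as keyring entry key0 and the NVMe/TCP controller is attached with that key. Issued by hand against the bdevperf RPC socket, as done earlier in this test, the two calls are:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Register the PSK file under the name the attach call references.
    "$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QrbfHRdWSl

    # Attach the NVMe/TCP controller over a TLS-secured connection using that key.
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1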
00:23:35.579 18:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:35.579 "subsystems": [ 00:23:35.579 { 00:23:35.579 "subsystem": "keyring", 00:23:35.579 "config": [ 00:23:35.579 { 00:23:35.579 "method": "keyring_file_add_key", 00:23:35.579 "params": { 00:23:35.579 "name": "key0", 00:23:35.579 "path": "/tmp/tmp.QrbfHRdWSl" 00:23:35.579 } 00:23:35.579 } 00:23:35.579 ] 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "subsystem": "iobuf", 00:23:35.579 "config": [ 00:23:35.579 { 00:23:35.579 "method": "iobuf_set_options", 00:23:35.579 "params": { 00:23:35.579 "small_pool_count": 8192, 00:23:35.579 "large_pool_count": 1024, 00:23:35.579 "small_bufsize": 8192, 00:23:35.579 "large_bufsize": 135168 00:23:35.579 } 00:23:35.579 } 00:23:35.579 ] 00:23:35.579 }, 00:23:35.579 { 00:23:35.579 "subsystem": "sock", 00:23:35.580 "config": [ 00:23:35.580 { 00:23:35.580 "method": "sock_set_default_impl", 00:23:35.580 "params": { 00:23:35.580 "impl_name": "posix" 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "sock_impl_set_options", 00:23:35.580 "params": { 00:23:35.580 "impl_name": "ssl", 00:23:35.580 "recv_buf_size": 4096, 00:23:35.580 "send_buf_size": 4096, 00:23:35.580 "enable_recv_pipe": true, 00:23:35.580 "enable_quickack": false, 00:23:35.580 "enable_placement_id": 0, 00:23:35.580 "enable_zerocopy_send_server": true, 00:23:35.580 "enable_zerocopy_send_client": false, 00:23:35.580 "zerocopy_threshold": 0, 00:23:35.580 "tls_version": 0, 00:23:35.580 "enable_ktls": false 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "sock_impl_set_options", 00:23:35.580 "params": { 00:23:35.580 "impl_name": "posix", 00:23:35.580 "recv_buf_size": 2097152, 00:23:35.580 "send_buf_size": 2097152, 00:23:35.580 "enable_recv_pipe": true, 00:23:35.580 "enable_quickack": false, 00:23:35.580 "enable_placement_id": 0, 00:23:35.580 "enable_zerocopy_send_server": true, 00:23:35.580 "enable_zerocopy_send_client": false, 00:23:35.580 "zerocopy_threshold": 0, 00:23:35.580 "tls_version": 0, 00:23:35.580 "enable_ktls": false 00:23:35.580 } 00:23:35.580 } 00:23:35.580 ] 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "subsystem": "vmd", 00:23:35.580 "config": [] 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "subsystem": "accel", 00:23:35.580 "config": [ 00:23:35.580 { 00:23:35.580 "method": "accel_set_options", 00:23:35.580 "params": { 00:23:35.580 "small_cache_size": 128, 00:23:35.580 "large_cache_size": 16, 00:23:35.580 "task_count": 2048, 00:23:35.580 "sequence_count": 2048, 00:23:35.580 "buf_count": 2048 00:23:35.580 } 00:23:35.580 } 00:23:35.580 ] 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "subsystem": "bdev", 00:23:35.580 "config": [ 00:23:35.580 { 00:23:35.580 "method": "bdev_set_options", 00:23:35.580 "params": { 00:23:35.580 "bdev_io_pool_size": 65535, 00:23:35.580 "bdev_io_cache_size": 256, 00:23:35.580 "bdev_auto_examine": true, 00:23:35.580 "iobuf_small_cache_size": 128, 00:23:35.580 "iobuf_large_cache_size": 16 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "bdev_raid_set_options", 00:23:35.580 "params": { 00:23:35.580 "process_window_size_kb": 1024 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "bdev_iscsi_set_options", 00:23:35.580 "params": { 00:23:35.580 "timeout_sec": 30 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "bdev_nvme_set_options", 00:23:35.580 "params": { 00:23:35.580 "action_on_timeout": "none", 00:23:35.580 "timeout_us": 0, 00:23:35.580 "timeout_admin_us": 0, 00:23:35.580 "keep_alive_timeout_ms": 
10000, 00:23:35.580 "arbitration_burst": 0, 00:23:35.580 "low_priority_weight": 0, 00:23:35.580 "medium_priority_weight": 0, 00:23:35.580 "high_priority_weight": 0, 00:23:35.580 "nvme_adminq_poll_period_us": 10000, 00:23:35.580 "nvme_ioq_poll_period_us": 0, 00:23:35.580 "io_queue_requests": 512, 00:23:35.580 "delay_cmd_submit": true, 00:23:35.580 "transport_retry_count": 4, 00:23:35.580 "bdev_retry_count": 3, 00:23:35.580 "transport_ack_timeout": 0, 00:23:35.580 "ctrlr_loss_timeout_sec": 0, 00:23:35.580 "reconnect_delay_sec": 0, 00:23:35.580 "fast_io_fail_timeout_sec": 0, 00:23:35.580 "disable_auto_failback": false, 00:23:35.580 "generate_uuids": false, 00:23:35.580 "transport_tos": 0, 00:23:35.580 "nvme_error_stat": false, 00:23:35.580 "rdma_srq_size": 0, 00:23:35.580 "io_path_stat": false, 00:23:35.580 "allow_accel_sequence": false, 00:23:35.580 "rdma_max_cq_size": 0, 00:23:35.580 "rdma_cm_event_timeout_ms": 0, 00:23:35.580 "dhchap_digests": [ 00:23:35.580 "sha256", 00:23:35.580 "sha384", 00:23:35.580 "sha512" 00:23:35.580 ], 00:23:35.580 "dhchap_dhgroups": [ 00:23:35.580 "null", 00:23:35.580 "ffdhe2048", 00:23:35.580 "ffdhe3072", 00:23:35.580 "ffdhe4096", 00:23:35.580 "ffdhe6144", 00:23:35.580 "ffdhe8192" 00:23:35.580 ] 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "bdev_nvme_attach_controller", 00:23:35.580 "params": { 00:23:35.580 "name": "nvme0", 00:23:35.580 "trtype": "TCP", 00:23:35.580 "adrfam": "IPv4", 00:23:35.580 "traddr": "10.0.0.2", 00:23:35.580 "trsvcid": "4420", 00:23:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.580 "prchk_reftag": false, 00:23:35.580 "prchk_guard": false, 00:23:35.580 "ctrlr_loss_timeout_sec": 0, 00:23:35.580 "reconnect_delay_sec": 0, 00:23:35.580 "fast_io_fail_timeout_sec": 0, 00:23:35.580 "psk": "key0", 00:23:35.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.580 "hdgst": false, 00:23:35.580 "ddgst": false 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "bdev_nvme_set_hotplug", 00:23:35.580 "params": { 00:23:35.580 "period_us": 100000, 00:23:35.580 "enable": false 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "bdev_enable_histogram", 00:23:35.580 "params": { 00:23:35.580 "name": "nvme0n1", 00:23:35.580 "enable": true 00:23:35.580 } 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "method": "bdev_wait_for_examine" 00:23:35.580 } 00:23:35.580 ] 00:23:35.580 }, 00:23:35.580 { 00:23:35.580 "subsystem": "nbd", 00:23:35.580 "config": [] 00:23:35.580 } 00:23:35.580 ] 00:23:35.580 }' 00:23:35.580 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.580 18:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.580 [2024-07-25 18:56:47.423816] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:35.580 [2024-07-25 18:56:47.423905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578529 ] 00:23:35.580 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.840 [2024-07-25 18:56:47.486661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.840 [2024-07-25 18:56:47.576792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.101 [2024-07-25 18:56:47.758577] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.669 18:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:36.669 18:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:36.669 18:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:36.669 18:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:36.927 18:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.927 18:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.927 Running I/O for 1 seconds... 00:23:38.303 00:23:38.303 Latency(us) 00:23:38.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.303 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:38.303 Verification LBA range: start 0x0 length 0x2000 00:23:38.303 nvme0n1 : 1.03 3095.15 12.09 0.00 0.00 40874.32 9272.13 52817.16 00:23:38.304 =================================================================================================================== 00:23:38.304 Total : 3095.15 12.09 0.00 0.00 40874.32 9272.13 52817.16 00:23:38.304 0 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:38.304 nvmf_trace.0 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3578529 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3578529 ']' 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3578529 
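Because bdevperf runs with -z it sits idle until the attached controller is verified and the workload is triggered over its RPC socket; the check-and-trigger sequence traced above, before the cleanup that follows, reduces to this sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

    # Confirm the TLS-attached controller is visible before generating I/O.
    name=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Run the verify workload defined on the bdevperf command line (-q 128 -o 4k -w verify -t 1).
    "$bperf_py" -s /var/tmp/bdevperf.sock perform_tests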
00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3578529 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3578529' 00:23:38.304 killing process with pid 3578529 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3578529 00:23:38.304 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.304 00:23:38.304 Latency(us) 00:23:38.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.304 =================================================================================================================== 00:23:38.304 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.304 18:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3578529 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.304 rmmod nvme_tcp 00:23:38.304 rmmod nvme_fabrics 00:23:38.304 rmmod nvme_keyring 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3578379 ']' 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3578379 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3578379 ']' 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3578379 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.304 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3578379 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3578379' 00:23:38.562 killing process with pid 3578379 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3578379 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3578379 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.562 18:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.100 18:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.100 18:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ErXYX3RFJI /tmp/tmp.uMbSgxlyUD /tmp/tmp.QrbfHRdWSl 00:23:41.100 00:23:41.100 real 1m19.211s 00:23:41.100 user 2m8.728s 00:23:41.100 sys 0m25.522s 00:23:41.100 18:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:41.100 18:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.100 ************************************ 00:23:41.100 END TEST nvmf_tls 00:23:41.100 ************************************ 00:23:41.100 18:56:52 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:41.100 18:56:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:41.100 18:56:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:41.100 18:56:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.101 ************************************ 00:23:41.101 START TEST nvmf_fips 00:23:41.101 ************************************ 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:41.101 * Looking for test storage... 
00:23:41.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.101 18:56:52 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:41.101 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:41.102 Error setting digest 00:23:41.102 00D220A9BA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:41.102 00D220A9BA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.102 18:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.002 
18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:43.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:43.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:43.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:43.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:43.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:23:43.002 00:23:43.002 --- 10.0.0.2 ping statistics --- 00:23:43.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.002 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:43.002 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:23:43.003 00:23:43.003 --- 10.0.0.1 ping statistics --- 00:23:43.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.003 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3580773 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3580773 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3580773 ']' 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:43.003 18:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.263 [2024-07-25 18:56:54.896484] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:43.263 [2024-07-25 18:56:54.896571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.263 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.263 [2024-07-25 18:56:54.967285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.263 [2024-07-25 18:56:55.058890] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.263 [2024-07-25 18:56:55.058952] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:43.263 [2024-07-25 18:56:55.058969] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.263 [2024-07-25 18:56:55.058982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.263 [2024-07-25 18:56:55.058993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.263 [2024-07-25 18:56:55.059024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:43.521 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:43.780 [2024-07-25 18:56:55.462650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.780 [2024-07-25 18:56:55.478607] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.780 [2024-07-25 18:56:55.478834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.780 [2024-07-25 18:56:55.510428] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:43.780 malloc0 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3580920 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3580920 /var/tmp/bdevperf.sock 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3580920 ']' 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:43.780 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.780 [2024-07-25 18:56:55.603162] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:43.780 [2024-07-25 18:56:55.603255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580920 ] 00:23:43.780 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.037 [2024-07-25 18:56:55.661917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.038 [2024-07-25 18:56:55.747582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.038 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:44.038 18:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:44.038 18:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:44.296 [2024-07-25 18:56:56.130371] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.296 [2024-07-25 18:56:56.130517] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:44.556 TLSTESTn1 00:23:44.556 18:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.556 Running I/O for 10 seconds... 
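The verification phase above reduces to three initiator-side commands, all of which appear verbatim in the trace: bdevperf is launched with its own RPC socket (-z keeps it waiting so the job can be driven over RPC), a TLS-enabled NVMe/TCP controller is attached using the PSK the test wrote to key.txt and chmod'ed to 0600 a few lines earlier, and the registered verify job is started. A condensed sketch of that sequence, with the Jenkins workspace path abbreviated to $SPDK_DIR for readability (the abbreviation is the only change from the commands in the trace):

  # 128-deep, 4 KiB verify workload for 10 s; -z defers the run until driven over RPC.
  $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # Attach the target over NVMe/TCP with TLS, using the PSK file prepared by fips.sh.
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK_DIR/test/nvmf/fips/key.txt

  # Kick off the job; the 10-second run and the latency summary below are its output.
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests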
00:23:54.540 00:23:54.540 Latency(us) 00:23:54.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.540 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.540 Verification LBA range: start 0x0 length 0x2000 00:23:54.540 TLSTESTn1 : 10.03 3097.48 12.10 0.00 0.00 41240.23 8398.32 70293.43 00:23:54.540 =================================================================================================================== 00:23:54.540 Total : 3097.48 12.10 0.00 0.00 41240.23 8398.32 70293.43 00:23:54.540 0 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:54.540 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:54.540 nvmf_trace.0 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3580920 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3580920 ']' 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3580920 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3580920 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3580920' 00:23:54.800 killing process with pid 3580920 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3580920 00:23:54.800 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.800 00:23:54.800 Latency(us) 00:23:54.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.800 =================================================================================================================== 00:23:54.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.800 [2024-07-25 18:57:06.503505] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:54.800 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3580920 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.060 rmmod nvme_tcp 00:23:55.060 rmmod nvme_fabrics 00:23:55.060 rmmod nvme_keyring 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3580773 ']' 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3580773 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3580773 ']' 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3580773 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3580773 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3580773' 00:23:55.060 killing process with pid 3580773 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3580773 00:23:55.060 [2024-07-25 18:57:06.829143] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:55.060 18:57:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3580773 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.318 18:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:57.848 00:23:57.848 real 0m16.604s 00:23:57.848 user 0m17.538s 00:23:57.848 sys 0m7.009s 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:57.848 ************************************ 00:23:57.848 END TEST nvmf_fips 
00:23:57.848 ************************************ 00:23:57.848 18:57:09 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:57.848 18:57:09 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:57.848 18:57:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:57.848 18:57:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:57.848 18:57:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:57.848 ************************************ 00:23:57.848 START TEST nvmf_fuzz 00:23:57.848 ************************************ 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:57.848 * Looking for test storage... 00:23:57.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.848 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.849 18:57:09 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.849 18:57:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.753 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:59.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:59.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:59.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:59.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:23:59.754 00:23:59.754 --- 10.0.0.2 ping statistics --- 00:23:59.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.754 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:23:59.754 00:23:59.754 --- 10.0.0.1 ping statistics --- 00:23:59.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.754 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3584781 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3584781 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3584781 ']' 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
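Once the RPC socket is up, the trace that follows provisions the fuzz target entirely through rpc_cmd, the harness's wrapper around scripts/rpc.py: it creates the TCP transport, backs it with a 64 MB malloc bdev (512-byte block size), exposes that bdev as a namespace of cnode1, and adds a listener on 10.0.0.2:4420. As a sketch, the same provisioning issued directly with rpc.py against the default /var/tmp/spdk.sock (the direct-invocation form is an assumption; the flag values are the ones in the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The resulting transport ID, 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420', is what both fuzzer passes below connect to.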
00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.754 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:00.050 Malloc0 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:00.050 18:57:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:32.135 Fuzzing completed. 
Shutting down the fuzz application 00:24:32.135 00:24:32.135 Dumping successful admin opcodes: 00:24:32.135 8, 9, 10, 24, 00:24:32.135 Dumping successful io opcodes: 00:24:32.135 0, 9, 00:24:32.135 NS: 0x200003aeff00 I/O qp, Total commands completed: 443139, total successful commands: 2579, random_seed: 2209936640 00:24:32.135 NS: 0x200003aeff00 admin qp, Total commands completed: 55391, total successful commands: 443, random_seed: 3126113216 00:24:32.135 18:57:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:32.135 Fuzzing completed. Shutting down the fuzz application 00:24:32.135 00:24:32.135 Dumping successful admin opcodes: 00:24:32.135 24, 00:24:32.135 Dumping successful io opcodes: 00:24:32.135 00:24:32.135 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3725578878 00:24:32.135 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3725743987 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:32.135 rmmod nvme_tcp 00:24:32.135 rmmod nvme_fabrics 00:24:32.135 rmmod nvme_keyring 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3584781 ']' 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3584781 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3584781 ']' 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3584781 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3584781 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
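Condensed, the fabrics_fuzz.sh run traced above is: create the TCP transport, back subsystem cnode1 with a 64 MB malloc bdev (512-byte blocks), listen on 10.0.0.2:4420, run nvme_fuzz against it for 30 seconds with a fixed -S seed, run it once more against the command list in example.json, then delete the subsystem. A hedged sketch of the same sequence issued through rpc.py directly (rpc_cmd in the trace is the harness wrapper around it); the flags are the ones visible in the trace, and only the SPDK_DIR/RPC shorthands are assumptions:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    # Target-side setup, as performed by fabrics_fuzz.sh@19-25 above.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create -b Malloc0 64 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Two fuzz passes: a timed randomized run, then a replay of example.json.
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -F "$trid" -j "$SPDK_DIR/test/app/fuzz/nvme_fuzz/example.json" -a

    # Teardown mirrors the trace before nvmftestfini removes the namespace.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1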
00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3584781' 00:24:32.135 killing process with pid 3584781 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3584781 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3584781 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.135 18:57:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.045 18:57:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.045 18:57:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:34.045 00:24:34.045 real 0m36.738s 00:24:34.045 user 0m50.042s 00:24:34.045 sys 0m15.556s 00:24:34.045 18:57:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:34.045 18:57:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:34.045 ************************************ 00:24:34.045 END TEST nvmf_fuzz 00:24:34.045 ************************************ 00:24:34.045 18:57:45 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:34.045 18:57:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:34.045 18:57:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:34.045 18:57:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:34.304 ************************************ 00:24:34.304 START TEST nvmf_multiconnection 00:24:34.304 ************************************ 00:24:34.304 18:57:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:34.304 * Looking for test storage... 
00:24:34.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:34.304 18:57:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.304 18:57:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:34.304 18:57:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.304 18:57:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.304 18:57:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.304 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.305 18:57:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.210 18:57:48 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:36.210 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:36.210 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.210 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:36.211 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:36.211 18:57:48 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:36.211 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.211 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
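The block above is the phy-mode network prep from nvmf/common.sh: the two ice ports at 0000:0a:00.0 and 0000:0a:00.1 are picked out by PCI vendor/device ID (0x8086/0x159b), their net devices cvl_0_0 and cvl_0_1 become the target and initiator interfaces, cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace, and the two ends get 10.0.0.2 and 10.0.0.1. A minimal sketch of the same wiring, with TGT_IF, INI_IF, and NS as assumed shorthands for the names in the trace:

    TGT_IF=cvl_0_0        # NVMF_TARGET_INTERFACE in the trace; moves into the namespace
    INI_IF=cvl_0_1        # NVMF_INITIATOR_INTERFACE; stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up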
00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:36.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:24:36.470 00:24:36.470 --- 10.0.0.2 ping statistics --- 00:24:36.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.470 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:36.470 00:24:36.470 --- 10.0.0.1 ping statistics --- 00:24:36.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.470 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3590397 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3590397 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3590397 ']' 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
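Before the multiconnection target comes up, the harness opens TCP port 4420 on the initiator-side interface, pings across the namespace boundary in both directions, and loads the kernel NVMe/TCP initiator; nvmf_tgt is then relaunched the same way as before, but on four cores (-m 0xF). A short sketch of that reachability check and restart, again with SPDK_DIR as an assumed shorthand:

    # Let NVMe/TCP traffic in from the initiator port and confirm both directions are reachable.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # default namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> default namespace

    modprobe nvme-tcp                                     # kernel initiator used by the later nvme connect calls

    # Relaunch the target for the multiconnection test, this time with a 4-core mask.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &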
00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:36.470 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.470 [2024-07-25 18:57:48.220855] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:36.470 [2024-07-25 18:57:48.220932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.470 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.470 [2024-07-25 18:57:48.289801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.729 [2024-07-25 18:57:48.382706] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.729 [2024-07-25 18:57:48.382761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.729 [2024-07-25 18:57:48.382788] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.729 [2024-07-25 18:57:48.382802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.729 [2024-07-25 18:57:48.382814] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.729 [2024-07-25 18:57:48.382898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.729 [2024-07-25 18:57:48.382966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.729 [2024-07-25 18:57:48.383101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.729 [2024-07-25 18:57:48.383096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.729 [2024-07-25 18:57:48.540714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.729 18:57:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.729 Malloc1 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.729 [2024-07-25 18:57:48.596257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.729 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.987 Malloc2 00:24:36.987 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.987 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 Malloc3 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 Malloc4 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
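The four-RPC pattern repeating above, and continuing below for cnode5 through cnode11, is one iteration of the multiconnection.sh loop: a 64 MB malloc bdev, a subsystem whose serial number tracks the loop index, the namespace attach, and a TCP listener on 10.0.0.2:4420 (the transport itself was created once, just before the loop). The whole loop reduces to roughly this sketch, with SPDK_DIR and RPC as assumed shorthands:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    NVMF_SUBSYS=11    # number of subsystems, as set in multiconnection.sh

    $RPC nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        $RPC bdev_malloc_create 64 512 -b "Malloc$i"
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done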
00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 Malloc5 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 Malloc6 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 Malloc7 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.988 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 Malloc8 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 Malloc9 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 Malloc10 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 Malloc11 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.247 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.248 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
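All eleven subsystems now exist on the target and the script moves straight on to connecting from the host. If you are retracing this by hand, it can be worth confirming the result first; the two queries below are standard SPDK RPCs but are not calls the traced script makes:

    # Optional sanity check (not part of multiconnection.sh): list what the loop above created.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_get_bdevs
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock nvmf_get_subsystems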
00:24:37.248 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.248 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.248 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.248 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:37.248 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.248 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:37.815 18:57:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:37.815 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:37.815 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.815 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:37.815 18:57:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.346 18:57:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:40.604 18:57:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:40.604 18:57:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:40.604 18:57:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.604 18:57:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:40.604 18:57:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:43.154 18:57:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:43.154 18:57:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:43.154 18:57:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:43.154 18:57:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:43.154 18:57:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.154 
18:57:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:43.154 18:57:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.154 18:57:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:43.415 18:57:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:43.415 18:57:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:43.415 18:57:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.415 18:57:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:43.415 18:57:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.317 18:57:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:46.250 18:57:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:46.250 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:46.250 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.250 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:46.250 18:57:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.203 18:57:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:48.771 18:58:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:48.771 18:58:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:48.772 18:58:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:48.772 18:58:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:48.772 18:58:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.672 18:58:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:51.607 18:58:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:51.607 18:58:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:51.607 18:58:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:51.607 18:58:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:51.607 18:58:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.507 18:58:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:54.445 18:58:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:54.445 18:58:06 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:54.445 18:58:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:54.445 18:58:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:54.445 18:58:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.345 18:58:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:57.279 18:58:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:57.279 18:58:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:57.279 18:58:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.279 18:58:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:57.279 18:58:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.813 18:58:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:00.379 18:58:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:00.379 18:58:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:00.379 18:58:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.379 18:58:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
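The xtrace entries above repeat the same connect-and-poll pattern once per subsystem: multiconnection.sh@28-@30 connect to nqn.2016-06.io.spdk:cnode$i over TCP and then call waitforserial SPDK$i, whose internals are the lsblk/grep loop traced at autotest_common.sh@1194-@1204. A minimal sketch of that phase, reconstructed from the trace (NVMF_SUBSYS, the host NQN/ID, the 2-second sleep and the 15-try limit are all taken from the log; the real helpers live in the scripts named in the trace, so this is an illustration rather than the canonical code):

#!/usr/bin/env bash
# Sketch of the connect phase shown in the trace above.
NVMF_SUBSYS=11
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

# Poll until one block device reports the expected serial (mirrors
# autotest_common.sh@1194-@1204: sleep 2, lsblk, grep -c, compare).
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"
done

Each iteration costs at least one sleep, which is why the wall-clock timestamps in the trace advance in roughly two-to-three-second steps between consecutive connects.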
00:25:00.379 18:58:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.285 18:58:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:03.220 18:58:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:03.220 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:03.220 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.220 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:03.220 18:58:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:05.122 18:58:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:06.063 18:58:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:06.063 18:58:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:06.063 18:58:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.063 18:58:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:06.063 18:58:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:07.965 18:58:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:07.965 18:58:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:07.965 18:58:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:07.965 18:58:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:07.965 18:58:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.965 18:58:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:07.965 18:58:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:07.965 [global] 00:25:07.965 thread=1 00:25:07.965 invalidate=1 00:25:07.965 rw=read 00:25:07.965 time_based=1 00:25:07.965 runtime=10 00:25:07.965 ioengine=libaio 00:25:07.965 direct=1 00:25:07.965 bs=262144 00:25:07.965 iodepth=64 00:25:07.965 norandommap=1 00:25:07.965 numjobs=1 00:25:07.965 00:25:07.965 [job0] 00:25:07.965 filename=/dev/nvme0n1 00:25:07.965 [job1] 00:25:07.965 filename=/dev/nvme10n1 00:25:07.965 [job2] 00:25:07.965 filename=/dev/nvme1n1 00:25:07.965 [job3] 00:25:07.965 filename=/dev/nvme2n1 00:25:07.965 [job4] 00:25:07.965 filename=/dev/nvme3n1 00:25:07.965 [job5] 00:25:07.965 filename=/dev/nvme4n1 00:25:07.965 [job6] 00:25:07.965 filename=/dev/nvme5n1 00:25:07.965 [job7] 00:25:07.965 filename=/dev/nvme6n1 00:25:07.965 [job8] 00:25:07.965 filename=/dev/nvme7n1 00:25:07.965 [job9] 00:25:07.965 filename=/dev/nvme8n1 00:25:07.965 [job10] 00:25:07.965 filename=/dev/nvme9n1 00:25:08.223 Could not set queue depth (nvme0n1) 00:25:08.223 Could not set queue depth (nvme10n1) 00:25:08.223 Could not set queue depth (nvme1n1) 00:25:08.223 Could not set queue depth (nvme2n1) 00:25:08.223 Could not set queue depth (nvme3n1) 00:25:08.223 Could not set queue depth (nvme4n1) 00:25:08.223 Could not set queue depth (nvme5n1) 00:25:08.223 Could not set queue depth (nvme6n1) 00:25:08.223 Could not set queue depth (nvme7n1) 00:25:08.223 Could not set queue depth (nvme8n1) 00:25:08.223 Could not set queue depth (nvme9n1) 00:25:08.223 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:08.223 fio-3.35 00:25:08.223 Starting 11 threads 00:25:20.476 00:25:20.476 job0: 
(groupid=0, jobs=1): err= 0: pid=3594658: Thu Jul 25 18:58:30 2024 00:25:20.476 read: IOPS=476, BW=119MiB/s (125MB/s)(1204MiB/10108msec) 00:25:20.476 slat (usec): min=11, max=79383, avg=1972.88, stdev=6174.07 00:25:20.476 clat (msec): min=3, max=280, avg=132.31, stdev=42.22 00:25:20.476 lat (msec): min=3, max=280, avg=134.28, stdev=43.11 00:25:20.476 clat percentiles (msec): 00:25:20.476 | 1.00th=[ 16], 5.00th=[ 64], 10.00th=[ 77], 20.00th=[ 103], 00:25:20.476 | 30.00th=[ 116], 40.00th=[ 124], 50.00th=[ 133], 60.00th=[ 144], 00:25:20.476 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 184], 95.00th=[ 199], 00:25:20.476 | 99.00th=[ 218], 99.50th=[ 241], 99.90th=[ 268], 99.95th=[ 268], 00:25:20.476 | 99.99th=[ 279] 00:25:20.476 bw ( KiB/s): min=80896, max=181760, per=6.16%, avg=121613.00, stdev=29387.96, samples=20 00:25:20.476 iops : min= 316, max= 710, avg=475.05, stdev=114.80, samples=20 00:25:20.476 lat (msec) : 4=0.04%, 10=0.69%, 20=0.91%, 50=2.29%, 100=15.16% 00:25:20.476 lat (msec) : 250=80.58%, 500=0.33% 00:25:20.476 cpu : usr=0.40%, sys=1.49%, ctx=984, majf=0, minf=4097 00:25:20.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:20.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.476 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.476 job1: (groupid=0, jobs=1): err= 0: pid=3594659: Thu Jul 25 18:58:30 2024 00:25:20.476 read: IOPS=757, BW=189MiB/s (199MB/s)(1908MiB/10078msec) 00:25:20.476 slat (usec): min=10, max=130389, avg=1169.76, stdev=4044.45 00:25:20.476 clat (msec): min=2, max=299, avg=83.29, stdev=34.82 00:25:20.476 lat (msec): min=2, max=299, avg=84.46, stdev=35.34 00:25:20.476 clat percentiles (msec): 00:25:20.476 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 51], 00:25:20.476 | 30.00th=[ 66], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 90], 00:25:20.476 | 70.00th=[ 100], 80.00th=[ 112], 90.00th=[ 130], 95.00th=[ 144], 00:25:20.476 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 207], 99.95th=[ 207], 00:25:20.476 | 99.99th=[ 300] 00:25:20.476 bw ( KiB/s): min=110080, max=404480, per=9.81%, avg=193753.55, stdev=69055.11, samples=20 00:25:20.476 iops : min= 430, max= 1580, avg=756.80, stdev=269.80, samples=20 00:25:20.476 lat (msec) : 4=0.05%, 10=0.77%, 20=1.28%, 50=17.74%, 100=50.43% 00:25:20.476 lat (msec) : 250=29.70%, 500=0.01% 00:25:20.476 cpu : usr=0.55%, sys=2.16%, ctx=1467, majf=0, minf=4097 00:25:20.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:20.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.476 issued rwts: total=7632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.476 job2: (groupid=0, jobs=1): err= 0: pid=3594660: Thu Jul 25 18:58:30 2024 00:25:20.476 read: IOPS=496, BW=124MiB/s (130MB/s)(1255MiB/10102msec) 00:25:20.476 slat (usec): min=13, max=57998, avg=1643.45, stdev=4900.30 00:25:20.476 clat (msec): min=29, max=253, avg=127.05, stdev=43.27 00:25:20.476 lat (msec): min=29, max=264, avg=128.70, stdev=43.82 00:25:20.476 clat percentiles (msec): 00:25:20.476 | 1.00th=[ 35], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 92], 00:25:20.476 | 30.00th=[ 110], 40.00th=[ 118], 50.00th=[ 126], 60.00th=[ 138], 
00:25:20.476 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 194], 00:25:20.476 | 99.00th=[ 211], 99.50th=[ 220], 99.90th=[ 230], 99.95th=[ 236], 00:25:20.476 | 99.99th=[ 253] 00:25:20.476 bw ( KiB/s): min=82432, max=305664, per=6.42%, avg=126876.90, stdev=49687.69, samples=20 00:25:20.476 iops : min= 322, max= 1194, avg=495.55, stdev=194.07, samples=20 00:25:20.476 lat (msec) : 50=4.08%, 100=19.16%, 250=76.73%, 500=0.02% 00:25:20.476 cpu : usr=0.34%, sys=1.78%, ctx=1101, majf=0, minf=4097 00:25:20.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:20.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.476 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.476 job3: (groupid=0, jobs=1): err= 0: pid=3594661: Thu Jul 25 18:58:30 2024 00:25:20.476 read: IOPS=784, BW=196MiB/s (206MB/s)(1971MiB/10048msec) 00:25:20.476 slat (usec): min=9, max=158620, avg=1097.09, stdev=4743.77 00:25:20.476 clat (msec): min=2, max=345, avg=80.41, stdev=47.55 00:25:20.476 lat (msec): min=2, max=345, avg=81.51, stdev=48.34 00:25:20.476 clat percentiles (msec): 00:25:20.476 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 41], 00:25:20.476 | 30.00th=[ 52], 40.00th=[ 62], 50.00th=[ 73], 60.00th=[ 82], 00:25:20.476 | 70.00th=[ 91], 80.00th=[ 111], 90.00th=[ 165], 95.00th=[ 184], 00:25:20.476 | 99.00th=[ 209], 99.50th=[ 226], 99.90th=[ 249], 99.95th=[ 249], 00:25:20.476 | 99.99th=[ 347] 00:25:20.476 bw ( KiB/s): min=82432, max=396288, per=10.14%, avg=200189.85, stdev=75648.68, samples=20 00:25:20.476 iops : min= 322, max= 1548, avg=781.95, stdev=295.46, samples=20 00:25:20.476 lat (msec) : 4=0.23%, 10=1.09%, 20=3.60%, 50=24.00%, 100=47.39% 00:25:20.476 lat (msec) : 250=23.68%, 500=0.01% 00:25:20.476 cpu : usr=0.34%, sys=2.51%, ctx=1514, majf=0, minf=4097 00:25:20.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:20.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.476 issued rwts: total=7884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.476 job4: (groupid=0, jobs=1): err= 0: pid=3594662: Thu Jul 25 18:58:30 2024 00:25:20.476 read: IOPS=1012, BW=253MiB/s (265MB/s)(2550MiB/10075msec) 00:25:20.476 slat (usec): min=10, max=73121, avg=843.84, stdev=3094.49 00:25:20.476 clat (msec): min=4, max=241, avg=62.32, stdev=36.13 00:25:20.476 lat (msec): min=4, max=241, avg=63.17, stdev=36.56 00:25:20.476 clat percentiles (msec): 00:25:20.476 | 1.00th=[ 15], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 32], 00:25:20.476 | 30.00th=[ 37], 40.00th=[ 48], 50.00th=[ 56], 60.00th=[ 64], 00:25:20.476 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 102], 95.00th=[ 144], 00:25:20.476 | 99.00th=[ 188], 99.50th=[ 211], 99.90th=[ 228], 99.95th=[ 234], 00:25:20.476 | 99.99th=[ 241] 00:25:20.476 bw ( KiB/s): min=93696, max=484352, per=13.14%, avg=259516.25, stdev=99466.53, samples=20 00:25:20.476 iops : min= 366, max= 1892, avg=1013.70, stdev=388.58, samples=20 00:25:20.476 lat (msec) : 10=0.53%, 20=1.56%, 50=40.42%, 100=47.15%, 250=10.34% 00:25:20.476 cpu : usr=0.58%, sys=2.86%, ctx=1826, majf=0, minf=4097 00:25:20.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, 
>=64=99.4% 00:25:20.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.477 issued rwts: total=10201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.477 job5: (groupid=0, jobs=1): err= 0: pid=3594663: Thu Jul 25 18:58:30 2024 00:25:20.477 read: IOPS=702, BW=176MiB/s (184MB/s)(1766MiB/10046msec) 00:25:20.477 slat (usec): min=9, max=136718, avg=927.56, stdev=5022.76 00:25:20.477 clat (msec): min=3, max=319, avg=90.06, stdev=49.30 00:25:20.477 lat (msec): min=3, max=319, avg=90.98, stdev=50.04 00:25:20.477 clat percentiles (msec): 00:25:20.477 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 30], 20.00th=[ 46], 00:25:20.477 | 30.00th=[ 62], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 94], 00:25:20.477 | 70.00th=[ 109], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 182], 00:25:20.477 | 99.00th=[ 205], 99.50th=[ 213], 99.90th=[ 239], 99.95th=[ 251], 00:25:20.477 | 99.99th=[ 321] 00:25:20.477 bw ( KiB/s): min=102400, max=271360, per=9.07%, avg=179157.80, stdev=47465.38, samples=20 00:25:20.477 iops : min= 400, max= 1060, avg=699.80, stdev=185.42, samples=20 00:25:20.477 lat (msec) : 4=0.01%, 10=1.08%, 20=4.40%, 50=17.12%, 100=42.03% 00:25:20.477 lat (msec) : 250=35.30%, 500=0.06% 00:25:20.477 cpu : usr=0.25%, sys=1.61%, ctx=1549, majf=0, minf=3721 00:25:20.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:20.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.477 issued rwts: total=7062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.477 job6: (groupid=0, jobs=1): err= 0: pid=3594664: Thu Jul 25 18:58:30 2024 00:25:20.477 read: IOPS=594, BW=149MiB/s (156MB/s)(1493MiB/10051msec) 00:25:20.477 slat (usec): min=10, max=72568, avg=1159.99, stdev=4413.31 00:25:20.477 clat (usec): min=1505, max=260555, avg=106499.19, stdev=46844.87 00:25:20.477 lat (usec): min=1535, max=269003, avg=107659.18, stdev=47522.92 00:25:20.477 clat percentiles (msec): 00:25:20.477 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 51], 20.00th=[ 71], 00:25:20.477 | 30.00th=[ 80], 40.00th=[ 88], 50.00th=[ 97], 60.00th=[ 112], 00:25:20.477 | 70.00th=[ 128], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 192], 00:25:20.477 | 99.00th=[ 213], 99.50th=[ 220], 99.90th=[ 251], 99.95th=[ 262], 00:25:20.477 | 99.99th=[ 262] 00:25:20.477 bw ( KiB/s): min=81408, max=230451, per=7.66%, avg=151221.75, stdev=46040.07, samples=20 00:25:20.477 iops : min= 318, max= 900, avg=590.70, stdev=179.83, samples=20 00:25:20.477 lat (msec) : 2=0.02%, 4=0.05%, 10=0.47%, 20=0.92%, 50=8.47% 00:25:20.477 lat (msec) : 100=41.63%, 250=48.33%, 500=0.10% 00:25:20.477 cpu : usr=0.34%, sys=1.96%, ctx=1443, majf=0, minf=4097 00:25:20.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:20.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.477 issued rwts: total=5971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.477 job7: (groupid=0, jobs=1): err= 0: pid=3594665: Thu Jul 25 18:58:30 2024 00:25:20.477 read: IOPS=822, BW=206MiB/s (216MB/s)(2071MiB/10076msec) 00:25:20.477 slat 
(usec): min=9, max=97174, avg=1023.12, stdev=3927.30 00:25:20.477 clat (msec): min=3, max=234, avg=76.76, stdev=49.36 00:25:20.477 lat (msec): min=3, max=270, avg=77.78, stdev=50.09 00:25:20.477 clat percentiles (msec): 00:25:20.477 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 32], 00:25:20.477 | 30.00th=[ 36], 40.00th=[ 49], 50.00th=[ 66], 60.00th=[ 83], 00:25:20.477 | 70.00th=[ 97], 80.00th=[ 126], 90.00th=[ 157], 95.00th=[ 174], 00:25:20.477 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 222], 99.95th=[ 234], 00:25:20.477 | 99.99th=[ 234] 00:25:20.477 bw ( KiB/s): min=91136, max=477184, per=10.66%, avg=210459.45, stdev=106387.86, samples=20 00:25:20.477 iops : min= 356, max= 1864, avg=822.10, stdev=415.58, samples=20 00:25:20.477 lat (msec) : 4=0.02%, 10=2.21%, 20=3.42%, 50=35.25%, 100=30.53% 00:25:20.477 lat (msec) : 250=28.57% 00:25:20.477 cpu : usr=0.39%, sys=2.37%, ctx=1584, majf=0, minf=4097 00:25:20.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:20.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.477 issued rwts: total=8284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.477 job8: (groupid=0, jobs=1): err= 0: pid=3594666: Thu Jul 25 18:58:30 2024 00:25:20.477 read: IOPS=529, BW=132MiB/s (139MB/s)(1334MiB/10079msec) 00:25:20.477 slat (usec): min=10, max=126698, avg=1498.80, stdev=6096.08 00:25:20.477 clat (usec): min=964, max=312166, avg=119289.83, stdev=60006.58 00:25:20.477 lat (usec): min=985, max=312213, avg=120788.62, stdev=61028.02 00:25:20.477 clat percentiles (msec): 00:25:20.477 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 47], 00:25:20.477 | 30.00th=[ 95], 40.00th=[ 121], 50.00th=[ 131], 60.00th=[ 146], 00:25:20.477 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 190], 95.00th=[ 201], 00:25:20.477 | 99.00th=[ 215], 99.50th=[ 220], 99.90th=[ 253], 99.95th=[ 292], 00:25:20.477 | 99.99th=[ 313] 00:25:20.477 bw ( KiB/s): min=76800, max=311296, per=6.84%, avg=135005.70, stdev=57868.76, samples=20 00:25:20.477 iops : min= 300, max= 1216, avg=527.35, stdev=226.06, samples=20 00:25:20.477 lat (usec) : 1000=0.04% 00:25:20.477 lat (msec) : 2=0.19%, 4=0.86%, 10=2.60%, 20=4.98%, 50=11.69% 00:25:20.477 lat (msec) : 100=11.34%, 250=68.18%, 500=0.11% 00:25:20.477 cpu : usr=0.33%, sys=1.45%, ctx=1185, majf=0, minf=4097 00:25:20.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:20.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.477 issued rwts: total=5337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.477 job9: (groupid=0, jobs=1): err= 0: pid=3594667: Thu Jul 25 18:58:30 2024 00:25:20.477 read: IOPS=839, BW=210MiB/s (220MB/s)(2121MiB/10107msec) 00:25:20.477 slat (usec): min=10, max=90838, avg=1065.76, stdev=3825.78 00:25:20.477 clat (msec): min=2, max=264, avg=75.13, stdev=41.96 00:25:20.477 lat (msec): min=2, max=264, avg=76.20, stdev=42.57 00:25:20.477 clat percentiles (msec): 00:25:20.477 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 29], 20.00th=[ 34], 00:25:20.477 | 30.00th=[ 51], 40.00th=[ 66], 50.00th=[ 75], 60.00th=[ 81], 00:25:20.477 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 125], 95.00th=[ 165], 00:25:20.477 | 99.00th=[ 201], 
99.50th=[ 224], 99.90th=[ 255], 99.95th=[ 266], 00:25:20.477 | 99.99th=[ 266] 00:25:20.477 bw ( KiB/s): min=95232, max=464384, per=10.91%, avg=215485.50, stdev=95392.40, samples=20 00:25:20.477 iops : min= 372, max= 1814, avg=841.70, stdev=372.54, samples=20 00:25:20.477 lat (msec) : 4=0.20%, 10=2.57%, 20=2.18%, 50=24.49%, 100=50.84% 00:25:20.477 lat (msec) : 250=19.59%, 500=0.13% 00:25:20.477 cpu : usr=0.47%, sys=2.52%, ctx=1612, majf=0, minf=4097 00:25:20.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:20.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.477 issued rwts: total=8482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.477 job10: (groupid=0, jobs=1): err= 0: pid=3594668: Thu Jul 25 18:58:30 2024 00:25:20.477 read: IOPS=721, BW=180MiB/s (189MB/s)(1823MiB/10108msec) 00:25:20.477 slat (usec): min=14, max=105405, avg=995.34, stdev=4240.45 00:25:20.477 clat (msec): min=2, max=284, avg=87.68, stdev=46.17 00:25:20.477 lat (msec): min=2, max=284, avg=88.68, stdev=46.75 00:25:20.477 clat percentiles (msec): 00:25:20.477 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 49], 00:25:20.477 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 79], 60.00th=[ 92], 00:25:20.477 | 70.00th=[ 108], 80.00th=[ 129], 90.00th=[ 159], 95.00th=[ 180], 00:25:20.477 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 257], 99.95th=[ 279], 00:25:20.477 | 99.99th=[ 284] 00:25:20.477 bw ( KiB/s): min=84480, max=314880, per=9.37%, avg=185006.60, stdev=63773.49, samples=20 00:25:20.477 iops : min= 330, max= 1230, avg=722.65, stdev=249.11, samples=20 00:25:20.477 lat (msec) : 4=0.18%, 10=1.37%, 20=2.04%, 50=17.41%, 100=44.44% 00:25:20.477 lat (msec) : 250=34.43%, 500=0.12% 00:25:20.477 cpu : usr=0.44%, sys=2.40%, ctx=1551, majf=0, minf=4097 00:25:20.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:20.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:20.477 issued rwts: total=7290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:20.477 00:25:20.477 Run status group 0 (all jobs): 00:25:20.477 READ: bw=1929MiB/s (2022MB/s), 119MiB/s-253MiB/s (125MB/s-265MB/s), io=19.0GiB (20.4GB), run=10046-10108msec 00:25:20.477 00:25:20.477 Disk stats (read/write): 00:25:20.477 nvme0n1: ios=9447/0, merge=0/0, ticks=1226812/0, in_queue=1226812, util=97.15% 00:25:20.477 nvme10n1: ios=15053/0, merge=0/0, ticks=1239589/0, in_queue=1239589, util=97.37% 00:25:20.478 nvme1n1: ios=9842/0, merge=0/0, ticks=1230151/0, in_queue=1230151, util=97.64% 00:25:20.478 nvme2n1: ios=15530/0, merge=0/0, ticks=1239630/0, in_queue=1239630, util=97.79% 00:25:20.478 nvme3n1: ios=20205/0, merge=0/0, ticks=1233735/0, in_queue=1233735, util=97.87% 00:25:20.478 nvme4n1: ios=13762/0, merge=0/0, ticks=1244412/0, in_queue=1244412, util=98.21% 00:25:20.478 nvme5n1: ios=11711/0, merge=0/0, ticks=1240414/0, in_queue=1240414, util=98.38% 00:25:20.478 nvme6n1: ios=16341/0, merge=0/0, ticks=1233049/0, in_queue=1233049, util=98.49% 00:25:20.478 nvme7n1: ios=10446/0, merge=0/0, ticks=1236222/0, in_queue=1236222, util=98.90% 00:25:20.478 nvme8n1: ios=16836/0, merge=0/0, ticks=1234606/0, in_queue=1234606, util=99.10% 00:25:20.478 nvme9n1: 
ios=14350/0, merge=0/0, ticks=1234346/0, in_queue=1234346, util=99.21% 00:25:20.478 18:58:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:20.478 [global] 00:25:20.478 thread=1 00:25:20.478 invalidate=1 00:25:20.478 rw=randwrite 00:25:20.478 time_based=1 00:25:20.478 runtime=10 00:25:20.478 ioengine=libaio 00:25:20.478 direct=1 00:25:20.478 bs=262144 00:25:20.478 iodepth=64 00:25:20.478 norandommap=1 00:25:20.478 numjobs=1 00:25:20.478 00:25:20.478 [job0] 00:25:20.478 filename=/dev/nvme0n1 00:25:20.478 [job1] 00:25:20.478 filename=/dev/nvme10n1 00:25:20.478 [job2] 00:25:20.478 filename=/dev/nvme1n1 00:25:20.478 [job3] 00:25:20.478 filename=/dev/nvme2n1 00:25:20.478 [job4] 00:25:20.478 filename=/dev/nvme3n1 00:25:20.478 [job5] 00:25:20.478 filename=/dev/nvme4n1 00:25:20.478 [job6] 00:25:20.478 filename=/dev/nvme5n1 00:25:20.478 [job7] 00:25:20.478 filename=/dev/nvme6n1 00:25:20.478 [job8] 00:25:20.478 filename=/dev/nvme7n1 00:25:20.478 [job9] 00:25:20.478 filename=/dev/nvme8n1 00:25:20.478 [job10] 00:25:20.478 filename=/dev/nvme9n1 00:25:20.478 Could not set queue depth (nvme0n1) 00:25:20.478 Could not set queue depth (nvme10n1) 00:25:20.478 Could not set queue depth (nvme1n1) 00:25:20.478 Could not set queue depth (nvme2n1) 00:25:20.478 Could not set queue depth (nvme3n1) 00:25:20.478 Could not set queue depth (nvme4n1) 00:25:20.478 Could not set queue depth (nvme5n1) 00:25:20.478 Could not set queue depth (nvme6n1) 00:25:20.478 Could not set queue depth (nvme7n1) 00:25:20.478 Could not set queue depth (nvme8n1) 00:25:20.478 Could not set queue depth (nvme9n1) 00:25:20.478 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.478 fio-3.35 00:25:20.478 Starting 11 threads 00:25:30.500 00:25:30.500 job0: (groupid=0, jobs=1): err= 0: pid=3595840: Thu Jul 25 18:58:41 2024 00:25:30.500 write: IOPS=594, BW=149MiB/s (156MB/s)(1497MiB/10074msec); 0 zone resets 00:25:30.501 slat (usec): min=18, max=84760, avg=1300.31, stdev=3498.48 00:25:30.501 clat (usec): min=1153, max=235671, avg=106240.76, stdev=53579.12 00:25:30.501 lat (usec): 
min=1234, max=247611, avg=107541.07, stdev=54271.58 00:25:30.501 clat percentiles (msec): 00:25:30.501 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 41], 20.00th=[ 45], 00:25:30.501 | 30.00th=[ 70], 40.00th=[ 95], 50.00th=[ 112], 60.00th=[ 124], 00:25:30.501 | 70.00th=[ 140], 80.00th=[ 153], 90.00th=[ 180], 95.00th=[ 194], 00:25:30.501 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 236], 99.95th=[ 236], 00:25:30.501 | 99.99th=[ 236] 00:25:30.501 bw ( KiB/s): min=90624, max=306688, per=10.20%, avg=151693.30, stdev=54769.06, samples=20 00:25:30.501 iops : min= 354, max= 1198, avg=592.55, stdev=213.94, samples=20 00:25:30.501 lat (msec) : 2=0.12%, 4=0.23%, 10=1.45%, 20=3.16%, 50=19.34% 00:25:30.501 lat (msec) : 100=17.80%, 250=57.90% 00:25:30.501 cpu : usr=1.70%, sys=1.90%, ctx=2834, majf=0, minf=1 00:25:30.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.501 issued rwts: total=0,5988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.501 job1: (groupid=0, jobs=1): err= 0: pid=3595852: Thu Jul 25 18:58:41 2024 00:25:30.501 write: IOPS=731, BW=183MiB/s (192MB/s)(1860MiB/10175msec); 0 zone resets 00:25:30.501 slat (usec): min=15, max=106372, avg=838.29, stdev=2867.35 00:25:30.501 clat (usec): min=1077, max=310621, avg=86658.76, stdev=57660.49 00:25:30.501 lat (usec): min=1117, max=310659, avg=87497.05, stdev=58115.37 00:25:30.501 clat percentiles (msec): 00:25:30.501 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 39], 00:25:30.501 | 30.00th=[ 42], 40.00th=[ 52], 50.00th=[ 80], 60.00th=[ 91], 00:25:30.501 | 70.00th=[ 111], 80.00th=[ 130], 90.00th=[ 163], 95.00th=[ 192], 00:25:30.501 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 309], 00:25:30.501 | 99.99th=[ 313] 00:25:30.501 bw ( KiB/s): min=96768, max=419328, per=12.69%, avg=188810.05, stdev=89091.89, samples=20 00:25:30.501 iops : min= 378, max= 1638, avg=737.50, stdev=348.06, samples=20 00:25:30.501 lat (msec) : 2=0.17%, 4=0.42%, 10=1.40%, 20=3.09%, 50=33.94% 00:25:30.501 lat (msec) : 100=23.98%, 250=35.06%, 500=1.94% 00:25:30.501 cpu : usr=2.18%, sys=2.32%, ctx=4185, majf=0, minf=1 00:25:30.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.501 issued rwts: total=0,7439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.501 job2: (groupid=0, jobs=1): err= 0: pid=3595853: Thu Jul 25 18:58:41 2024 00:25:30.501 write: IOPS=411, BW=103MiB/s (108MB/s)(1045MiB/10169msec); 0 zone resets 00:25:30.501 slat (usec): min=16, max=58654, avg=2065.89, stdev=4739.72 00:25:30.501 clat (msec): min=2, max=516, avg=153.55, stdev=76.31 00:25:30.501 lat (msec): min=2, max=516, avg=155.62, stdev=77.31 00:25:30.501 clat percentiles (msec): 00:25:30.501 | 1.00th=[ 8], 5.00th=[ 52], 10.00th=[ 91], 20.00th=[ 104], 00:25:30.501 | 30.00th=[ 117], 40.00th=[ 129], 50.00th=[ 138], 60.00th=[ 153], 00:25:30.501 | 70.00th=[ 169], 80.00th=[ 192], 90.00th=[ 243], 95.00th=[ 309], 00:25:30.501 | 99.00th=[ 456], 99.50th=[ 493], 99.90th=[ 510], 99.95th=[ 518], 00:25:30.501 | 99.99th=[ 518] 00:25:30.501 bw ( KiB/s): min=38912, 
max=163840, per=7.08%, avg=105403.20, stdev=35266.34, samples=20 00:25:30.501 iops : min= 152, max= 640, avg=411.70, stdev=137.78, samples=20 00:25:30.501 lat (msec) : 4=0.10%, 10=1.48%, 20=0.74%, 50=2.46%, 100=13.73% 00:25:30.501 lat (msec) : 250=72.06%, 500=9.07%, 750=0.36% 00:25:30.501 cpu : usr=1.24%, sys=1.30%, ctx=1641, majf=0, minf=1 00:25:30.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.501 issued rwts: total=0,4180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.501 job3: (groupid=0, jobs=1): err= 0: pid=3595854: Thu Jul 25 18:58:41 2024 00:25:30.501 write: IOPS=464, BW=116MiB/s (122MB/s)(1182MiB/10174msec); 0 zone resets 00:25:30.501 slat (usec): min=21, max=53002, avg=1940.22, stdev=4515.42 00:25:30.501 clat (msec): min=3, max=529, avg=135.71, stdev=78.31 00:25:30.501 lat (msec): min=3, max=529, avg=137.65, stdev=79.39 00:25:30.501 clat percentiles (msec): 00:25:30.501 | 1.00th=[ 17], 5.00th=[ 51], 10.00th=[ 79], 20.00th=[ 85], 00:25:30.501 | 30.00th=[ 93], 40.00th=[ 103], 50.00th=[ 115], 60.00th=[ 129], 00:25:30.501 | 70.00th=[ 146], 80.00th=[ 178], 90.00th=[ 222], 95.00th=[ 296], 00:25:30.501 | 99.00th=[ 460], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 531], 00:25:30.501 | 99.99th=[ 531] 00:25:30.501 bw ( KiB/s): min=36864, max=215040, per=8.03%, avg=119424.00, stdev=48834.31, samples=20 00:25:30.501 iops : min= 144, max= 840, avg=466.50, stdev=190.76, samples=20 00:25:30.501 lat (msec) : 4=0.06%, 10=0.34%, 20=0.99%, 50=3.64%, 100=33.25% 00:25:30.501 lat (msec) : 250=54.19%, 500=7.11%, 750=0.42% 00:25:30.501 cpu : usr=1.55%, sys=1.36%, ctx=1755, majf=0, minf=1 00:25:30.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.501 issued rwts: total=0,4728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.501 job4: (groupid=0, jobs=1): err= 0: pid=3595855: Thu Jul 25 18:58:41 2024 00:25:30.501 write: IOPS=583, BW=146MiB/s (153MB/s)(1484MiB/10176msec); 0 zone resets 00:25:30.501 slat (usec): min=16, max=145154, avg=1205.75, stdev=3698.02 00:25:30.501 clat (usec): min=1445, max=597743, avg=108454.11, stdev=69281.51 00:25:30.501 lat (usec): min=1669, max=597783, avg=109659.85, stdev=70102.58 00:25:30.501 clat percentiles (msec): 00:25:30.501 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 48], 00:25:30.501 | 30.00th=[ 71], 40.00th=[ 96], 50.00th=[ 111], 60.00th=[ 123], 00:25:30.501 | 70.00th=[ 131], 80.00th=[ 140], 90.00th=[ 167], 95.00th=[ 190], 00:25:30.501 | 99.00th=[ 414], 99.50th=[ 468], 99.90th=[ 584], 99.95th=[ 592], 00:25:30.501 | 99.99th=[ 600] 00:25:30.501 bw ( KiB/s): min=47104, max=355328, per=10.11%, avg=150371.65, stdev=64363.11, samples=20 00:25:30.501 iops : min= 184, max= 1388, avg=587.35, stdev=251.37, samples=20 00:25:30.501 lat (msec) : 2=0.03%, 4=0.45%, 10=1.30%, 20=2.32%, 50=18.41% 00:25:30.501 lat (msec) : 100=19.52%, 250=54.90%, 500=2.70%, 750=0.35% 00:25:30.501 cpu : usr=1.96%, sys=1.98%, ctx=3264, majf=0, minf=1 00:25:30.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:30.501 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.501 issued rwts: total=0,5936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.501 job5: (groupid=0, jobs=1): err= 0: pid=3595856: Thu Jul 25 18:58:41 2024 00:25:30.501 write: IOPS=539, BW=135MiB/s (141MB/s)(1359MiB/10076msec); 0 zone resets 00:25:30.501 slat (usec): min=24, max=52547, avg=1638.95, stdev=3590.79 00:25:30.501 clat (usec): min=1097, max=272844, avg=116765.65, stdev=52947.81 00:25:30.501 lat (usec): min=1137, max=272879, avg=118404.59, stdev=53649.99 00:25:30.501 clat percentiles (msec): 00:25:30.501 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 69], 00:25:30.501 | 30.00th=[ 88], 40.00th=[ 109], 50.00th=[ 120], 60.00th=[ 134], 00:25:30.501 | 70.00th=[ 146], 80.00th=[ 161], 90.00th=[ 182], 95.00th=[ 203], 00:25:30.501 | 99.00th=[ 251], 99.50th=[ 264], 99.90th=[ 271], 99.95th=[ 271], 00:25:30.501 | 99.99th=[ 275] 00:25:30.501 bw ( KiB/s): min=67719, max=363520, per=9.25%, avg=137555.55, stdev=65415.10, samples=20 00:25:30.501 iops : min= 264, max= 1420, avg=537.30, stdev=255.56, samples=20 00:25:30.501 lat (msec) : 2=0.07%, 4=0.22%, 10=1.01%, 20=1.60%, 50=14.28% 00:25:30.501 lat (msec) : 100=18.40%, 250=63.30%, 500=1.12% 00:25:30.501 cpu : usr=1.87%, sys=1.53%, ctx=2046, majf=0, minf=1 00:25:30.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:30.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.501 issued rwts: total=0,5436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.501 job6: (groupid=0, jobs=1): err= 0: pid=3595857: Thu Jul 25 18:58:41 2024 00:25:30.501 write: IOPS=464, BW=116MiB/s (122MB/s)(1177MiB/10131msec); 0 zone resets 00:25:30.502 slat (usec): min=16, max=246176, avg=1514.03, stdev=5543.45 00:25:30.502 clat (usec): min=1310, max=668077, avg=135921.31, stdev=79393.07 00:25:30.502 lat (usec): min=1353, max=668139, avg=137435.34, stdev=80428.78 00:25:30.502 clat percentiles (msec): 00:25:30.502 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 51], 20.00th=[ 87], 00:25:30.502 | 30.00th=[ 105], 40.00th=[ 113], 50.00th=[ 130], 60.00th=[ 144], 00:25:30.502 | 70.00th=[ 159], 80.00th=[ 176], 90.00th=[ 197], 95.00th=[ 259], 00:25:30.502 | 99.00th=[ 477], 99.50th=[ 550], 99.90th=[ 651], 99.95th=[ 667], 00:25:30.502 | 99.99th=[ 667] 00:25:30.502 bw ( KiB/s): min=34816, max=172544, per=7.99%, avg=118897.00, stdev=33091.14, samples=20 00:25:30.502 iops : min= 136, max= 674, avg=464.40, stdev=129.28, samples=20 00:25:30.502 lat (msec) : 2=0.08%, 4=0.36%, 10=1.04%, 20=2.74%, 50=5.78% 00:25:30.502 lat (msec) : 100=16.04%, 250=68.68%, 500=4.55%, 750=0.72% 00:25:30.502 cpu : usr=1.48%, sys=1.70%, ctx=2687, majf=0, minf=1 00:25:30.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.502 issued rwts: total=0,4707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.502 job7: (groupid=0, jobs=1): err= 0: pid=3595858: Thu Jul 25 18:58:41 2024 00:25:30.502 write: IOPS=534, BW=134MiB/s 
(140MB/s)(1355MiB/10131msec); 0 zone resets 00:25:30.502 slat (usec): min=19, max=77495, avg=1660.32, stdev=3846.16 00:25:30.502 clat (usec): min=953, max=299249, avg=117939.64, stdev=47028.27 00:25:30.502 lat (usec): min=996, max=299281, avg=119599.96, stdev=47565.77 00:25:30.502 clat percentiles (msec): 00:25:30.502 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 49], 20.00th=[ 86], 00:25:30.502 | 30.00th=[ 97], 40.00th=[ 112], 50.00th=[ 122], 60.00th=[ 132], 00:25:30.502 | 70.00th=[ 142], 80.00th=[ 155], 90.00th=[ 180], 95.00th=[ 188], 00:25:30.502 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 288], 99.95th=[ 288], 00:25:30.502 | 99.99th=[ 300] 00:25:30.502 bw ( KiB/s): min=91648, max=210432, per=9.22%, avg=137102.20, stdev=33260.36, samples=20 00:25:30.502 iops : min= 358, max= 822, avg=535.55, stdev=129.92, samples=20 00:25:30.502 lat (usec) : 1000=0.02% 00:25:30.502 lat (msec) : 2=0.09%, 4=0.24%, 10=1.77%, 20=2.09%, 50=6.59% 00:25:30.502 lat (msec) : 100=20.58%, 250=68.27%, 500=0.35% 00:25:30.502 cpu : usr=1.63%, sys=1.86%, ctx=2024, majf=0, minf=1 00:25:30.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.502 issued rwts: total=0,5418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.502 job8: (groupid=0, jobs=1): err= 0: pid=3595859: Thu Jul 25 18:58:41 2024 00:25:30.502 write: IOPS=441, BW=110MiB/s (116MB/s)(1118MiB/10132msec); 0 zone resets 00:25:30.502 slat (usec): min=24, max=64031, avg=1553.06, stdev=4328.34 00:25:30.502 clat (usec): min=899, max=524813, avg=143412.01, stdev=77823.09 00:25:30.502 lat (usec): min=970, max=534183, avg=144965.07, stdev=78987.91 00:25:30.502 clat percentiles (msec): 00:25:30.502 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 57], 20.00th=[ 96], 00:25:30.502 | 30.00th=[ 110], 40.00th=[ 122], 50.00th=[ 138], 60.00th=[ 153], 00:25:30.502 | 70.00th=[ 167], 80.00th=[ 180], 90.00th=[ 211], 95.00th=[ 279], 00:25:30.502 | 99.00th=[ 468], 99.50th=[ 506], 99.90th=[ 523], 99.95th=[ 523], 00:25:30.502 | 99.99th=[ 527] 00:25:30.502 bw ( KiB/s): min=38912, max=165888, per=7.58%, avg=112844.80, stdev=31323.27, samples=20 00:25:30.502 iops : min= 152, max= 648, avg=440.80, stdev=122.36, samples=20 00:25:30.502 lat (usec) : 1000=0.07% 00:25:30.502 lat (msec) : 2=0.20%, 4=0.22%, 10=0.56%, 20=1.79%, 50=5.70% 00:25:30.502 lat (msec) : 100=13.71%, 250=71.77%, 500=5.37%, 750=0.60% 00:25:30.502 cpu : usr=1.47%, sys=1.73%, ctx=2526, majf=0, minf=1 00:25:30.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.502 issued rwts: total=0,4471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.502 job9: (groupid=0, jobs=1): err= 0: pid=3595860: Thu Jul 25 18:58:41 2024 00:25:30.502 write: IOPS=514, BW=129MiB/s (135MB/s)(1304MiB/10132msec); 0 zone resets 00:25:30.502 slat (usec): min=25, max=169175, avg=1657.17, stdev=4850.39 00:25:30.502 clat (msec): min=3, max=633, avg=122.58, stdev=77.11 00:25:30.502 lat (msec): min=3, max=633, avg=124.23, stdev=78.10 00:25:30.502 clat percentiles (msec): 00:25:30.502 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 73], 
00:25:30.502 | 30.00th=[ 91], 40.00th=[ 105], 50.00th=[ 118], 60.00th=[ 129], 00:25:30.502 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 171], 95.00th=[ 184], 00:25:30.502 | 99.00th=[ 558], 99.50th=[ 600], 99.90th=[ 634], 99.95th=[ 634], 00:25:30.502 | 99.99th=[ 634] 00:25:30.502 bw ( KiB/s): min=36864, max=221184, per=8.87%, avg=131961.60, stdev=46692.53, samples=20 00:25:30.502 iops : min= 144, max= 864, avg=515.45, stdev=182.36, samples=20 00:25:30.502 lat (msec) : 4=0.04%, 10=0.17%, 20=0.81%, 50=11.18%, 100=24.67% 00:25:30.502 lat (msec) : 250=59.36%, 500=2.59%, 750=1.19% 00:25:30.502 cpu : usr=1.57%, sys=1.94%, ctx=2141, majf=0, minf=1 00:25:30.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.502 issued rwts: total=0,5217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.502 job10: (groupid=0, jobs=1): err= 0: pid=3595861: Thu Jul 25 18:58:41 2024 00:25:30.502 write: IOPS=554, BW=139MiB/s (145MB/s)(1405MiB/10140msec); 0 zone resets 00:25:30.502 slat (usec): min=15, max=48378, avg=1124.83, stdev=3162.80 00:25:30.502 clat (usec): min=1077, max=346694, avg=114296.31, stdev=69353.22 00:25:30.502 lat (usec): min=1111, max=346736, avg=115421.14, stdev=70070.57 00:25:30.502 clat percentiles (msec): 00:25:30.502 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 16], 20.00th=[ 49], 00:25:30.502 | 30.00th=[ 80], 40.00th=[ 99], 50.00th=[ 118], 60.00th=[ 132], 00:25:30.502 | 70.00th=[ 146], 80.00th=[ 163], 90.00th=[ 188], 95.00th=[ 247], 00:25:30.502 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 338], 99.95th=[ 342], 00:25:30.502 | 99.99th=[ 347] 00:25:30.502 bw ( KiB/s): min=79872, max=231424, per=9.56%, avg=142208.00, stdev=39422.38, samples=20 00:25:30.502 iops : min= 312, max= 904, avg=555.50, stdev=153.99, samples=20 00:25:30.502 lat (msec) : 2=0.34%, 4=1.60%, 10=4.45%, 20=5.07%, 50=9.17% 00:25:30.502 lat (msec) : 100=20.17%, 250=54.56%, 500=4.65% 00:25:30.502 cpu : usr=1.67%, sys=1.83%, ctx=3489, majf=0, minf=1 00:25:30.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.502 issued rwts: total=0,5618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.502 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.502 00:25:30.502 Run status group 0 (all jobs): 00:25:30.502 WRITE: bw=1453MiB/s (1523MB/s), 103MiB/s-183MiB/s (108MB/s-192MB/s), io=14.4GiB (15.5GB), run=10074-10176msec 00:25:30.502 00:25:30.502 Disk stats (read/write): 00:25:30.502 nvme0n1: ios=47/11604, merge=0/0, ticks=3323/1201196, in_queue=1204519, util=99.81% 00:25:30.502 nvme10n1: ios=48/14831, merge=0/0, ticks=75/1241237, in_queue=1241312, util=97.50% 00:25:30.502 nvme1n1: ios=0/8322, merge=0/0, ticks=0/1234514, in_queue=1234514, util=97.36% 00:25:30.502 nvme2n1: ios=0/9413, merge=0/0, ticks=0/1230690, in_queue=1230690, util=97.53% 00:25:30.502 nvme3n1: ios=0/11808, merge=0/0, ticks=0/1237928, in_queue=1237928, util=97.55% 00:25:30.502 nvme4n1: ios=45/10571, merge=0/0, ticks=2063/1204093, in_queue=1206156, util=99.98% 00:25:30.502 nvme5n1: ios=40/9404, merge=0/0, ticks=977/1237455, in_queue=1238432, util=100.00% 00:25:30.502 nvme6n1: ios=39/10826, merge=0/0, 
ticks=1369/1229072, in_queue=1230441, util=100.00% 00:25:30.502 nvme7n1: ios=0/8933, merge=0/0, ticks=0/1244838, in_queue=1244838, util=98.67% 00:25:30.502 nvme8n1: ios=0/10423, merge=0/0, ticks=0/1236701, in_queue=1236701, util=98.88% 00:25:30.502 nvme9n1: ios=41/11226, merge=0/0, ticks=1488/1245435, in_queue=1246923, util=100.00% 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:30.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:30.502 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.503 18:58:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:30.503 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.503 18:58:42 
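By this point both fio passes (the sequential-read pass launched at multiconnection.sh@33 and the randwrite pass at @34) have completed, and the trace moves on to teardown at multiconnection.sh@36-@40: sync, then for every subsystem an nvme disconnect, a wait for the serial to disappear from lsblk, and an nvmf_delete_subsystem RPC. A hedged sketch of that loop, reconstructed from the trace (rpc_cmd and waitforserial_disconnect are the framework helpers used verbatim in the log; the retry/sleep policy inside the sketch's wait loop is an assumption, since the trace only shows the lsblk | grep -q -w checks at autotest_common.sh@1216/@1223):

# Sketch of the teardown phase traced at multiconnection.sh@36-@40.
waitforserial_disconnect() {
    local serial=$1 i=0
    # Keep checking until no block device reports the serial any more.
    # (Retry limit and sleep are assumptions; the trace only shows the checks.)
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 2
    done
    return 0
}

sync
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    waitforserial_disconnect "SPDK$i"
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # rpc_cmd comes from the autotest framework
done

In the log each disconnect reports "disconnected 1 controller(s)" almost immediately, so the grep checks pass on the first try and no extra sleep entries appear in the trace.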
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:30.503 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:30.503 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:30.761 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:30.761 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:30.761 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.761 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.761 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.761 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.761 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:31.019 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.019 18:58:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:31.276 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:31.276 18:58:43 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:31.276 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:31.276 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:31.276 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.277 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:31.534 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.534 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:31.792 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.792 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:32.050 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:32.050 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.050 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.308 18:58:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.308 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.308 18:58:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:32.308 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.308 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.309 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:32.309 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:32.309 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:32.309 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:32.309 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:32.309 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- 
# trap - SIGINT SIGTERM EXIT 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:32.568 rmmod nvme_tcp 00:25:32.568 rmmod nvme_fabrics 00:25:32.568 rmmod nvme_keyring 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3590397 ']' 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3590397 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3590397 ']' 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3590397 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3590397 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3590397' 00:25:32.568 killing process with pid 3590397 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3590397 00:25:32.568 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3590397 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.137 18:58:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.044 18:58:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:35.044 00:25:35.044 real 1m0.943s 00:25:35.044 user 3m25.447s 00:25:35.044 sys 0m24.442s 00:25:35.044 18:58:46 nvmf_tcp.nvmf_multiconnection -- 
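The trace above steps subsystems cnode1 through cnode11 through the same three-part teardown: disconnect the initiator, wait until the namespace's serial number disappears from lsblk, then delete the subsystem on the target. A minimal sketch of that loop, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper and an inline poll standing in for waitforserial_disconnect, looks like this:

# Tear down each test subsystem: host-side disconnect, wait for the block
# device (matched by its SPDK$i serial) to vanish, then delete it on the target.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done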
common/autotest_common.sh@1122 -- # xtrace_disable 00:25:35.044 18:58:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.044 ************************************ 00:25:35.044 END TEST nvmf_multiconnection 00:25:35.044 ************************************ 00:25:35.044 18:58:46 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:35.044 18:58:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:35.044 18:58:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:35.044 18:58:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:35.302 ************************************ 00:25:35.302 START TEST nvmf_initiator_timeout 00:25:35.302 ************************************ 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:35.302 * Looking for test storage... 00:25:35.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.302 18:58:46 
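Before any target configuration happens, nvmf/common.sh pins down the connection identity that the later nvme connect calls reuse: the TCP port, the namespace serial, and a host NQN / host ID pair generated with nvme gen-hostnqn. In shorthand (the NVME_HOSTID derivation below is only an illustration of the value the harness stores):

NVMF_PORT=4420
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # illustrative: the uuid portion of the host NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")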
nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.302 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 
1 ']' 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:35.302 18:58:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.208 
18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:37.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:37.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:37.208 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:37.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:37.208 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.209 18:58:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:25:37.209 00:25:37.209 --- 10.0.0.2 ping statistics --- 00:25:37.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.209 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:25:37.209 00:25:37.209 --- 10.0.0.1 ping statistics --- 00:25:37.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.209 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3599195 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3599195 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3599195 ']' 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:37.209 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.468 [2024-07-25 18:58:49.096523] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:25:37.468 [2024-07-25 18:58:49.096590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.468 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.468 [2024-07-25 18:58:49.159418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.468 [2024-07-25 18:58:49.244054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
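The nvmftestinit sequence traced above builds the test network by dedicating one port of the NIC (cvl_0_0) to the target inside a private network namespace and leaving the other port (cvl_0_1) in the root namespace as the initiator, then checking reachability both ways before the target application is launched under ip netns exec. Condensed from the trace, the steps are:

# Target port goes into its own namespace; initiator port stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator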
00:25:37.468 [2024-07-25 18:58:49.244138] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.468 [2024-07-25 18:58:49.244153] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.468 [2024-07-25 18:58:49.244164] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.468 [2024-07-25 18:58:49.244173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.468 [2024-07-25 18:58:49.244241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.468 [2024-07-25 18:58:49.244304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.468 [2024-07-25 18:58:49.244718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.468 [2024-07-25 18:58:49.244723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.729 Malloc0 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.729 Delay0 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.729 [2024-07-25 18:58:49.437144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.729 [2024-07-25 18:58:49.465430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.729 18:58:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:38.296 18:58:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:38.296 18:58:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:38.296 18:58:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.296 18:58:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:38.296 18:58:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3599617 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:40.827 18:58:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:40.827 [global] 00:25:40.827 thread=1 00:25:40.827 invalidate=1 00:25:40.827 rw=write 00:25:40.827 time_based=1 00:25:40.827 runtime=60 00:25:40.827 
ioengine=libaio 00:25:40.827 direct=1 00:25:40.827 bs=4096 00:25:40.827 iodepth=1 00:25:40.827 norandommap=0 00:25:40.827 numjobs=1 00:25:40.827 00:25:40.827 verify_dump=1 00:25:40.827 verify_backlog=512 00:25:40.827 verify_state_save=0 00:25:40.827 do_verify=1 00:25:40.827 verify=crc32c-intel 00:25:40.827 [job0] 00:25:40.827 filename=/dev/nvme0n1 00:25:40.827 Could not set queue depth (nvme0n1) 00:25:40.827 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:40.827 fio-3.35 00:25:40.827 Starting 1 thread 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 true 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 true 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 true 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 true 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.357 18:58:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.693 true 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.693 true 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.693 
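For readability, the job file generated by the fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 60 -v), flattened into the trace, amounts to the following; /dev/nvme0n1 is the freshly connected Delay0 namespace:

[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1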
18:58:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.693 true 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.693 true 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:46.693 18:58:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3599617 00:26:42.933 00:26:42.933 job0: (groupid=0, jobs=1): err= 0: pid=3599688: Thu Jul 25 18:59:52 2024 00:26:42.933 read: IOPS=37, BW=149KiB/s (153kB/s)(8944KiB/60017msec) 00:26:42.933 slat (nsec): min=4775, max=61229, avg=16321.66, stdev=10032.75 00:26:42.933 clat (usec): min=248, max=41096k, avg=26540.84, stdev=869069.15 00:26:42.933 lat (usec): min=254, max=41096k, avg=26557.16, stdev=869069.24 00:26:42.933 clat percentiles (usec): 00:26:42.933 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 00:26:42.933 | 20.00th=[ 281], 30.00th=[ 297], 40.00th=[ 314], 00:26:42.933 | 50.00th=[ 355], 60.00th=[ 383], 70.00th=[ 388], 00:26:42.933 | 80.00th=[ 498], 90.00th=[ 41681], 95.00th=[ 42206], 00:26:42.933 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 43254], 00:26:42.933 | 99.95th=[ 43779], 99.99th=[17112761] 00:26:42.933 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60017msec); 0 zone resets 00:26:42.933 slat (usec): min=5, max=40794, avg=37.39, stdev=958.19 00:26:42.933 clat (usec): min=172, max=1032, avg=201.93, stdev=24.56 00:26:42.933 lat (usec): min=179, max=41013, avg=239.33, stdev=959.33 00:26:42.933 clat percentiles (usec): 00:26:42.933 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:26:42.933 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:26:42.933 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 227], 00:26:42.933 | 99.00th=[ 273], 99.50th=[ 310], 99.90th=[ 383], 99.95th=[ 383], 00:26:42.933 | 99.99th=[ 1037] 00:26:42.933 bw ( KiB/s): min= 1392, max= 8192, per=100.00%, avg=5120.00, stdev=3011.43, samples=4 00:26:42.933 iops : min= 348, max= 2048, avg=1280.00, stdev=752.86, samples=4 00:26:42.933 lat (usec) : 250=52.44%, 500=38.24%, 750=0.40% 00:26:42.933 lat (msec) : 2=0.02%, 4=0.02%, 50=8.86%, >=2000=0.02% 00:26:42.933 cpu : usr=0.06%, sys=0.13%, ctx=4801, majf=0, minf=2 00:26:42.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:42.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.933 issued rwts: total=2236,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:42.933 00:26:42.933 Run status group 0 (all jobs): 00:26:42.933 READ: bw=149KiB/s (153kB/s), 149KiB/s-149KiB/s (153kB/s-153kB/s), 
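What makes this an initiator-timeout test is the Delay0 bdev wrapped around Malloc0: it starts out with 30 us latencies, and while the 60-second fio write/verify job is running the harness raises every latency knob into the tens of seconds and then drops them back, expecting the job to ride out the stall (as the fio output above shows it did). The RPC sequence traced before and after the run, with scripts/rpc.py standing in for rpc_cmd and the delay arguments given in microseconds, is roughly:

# Backing namespace: a malloc bdev wrapped in a delay bdev with 30 us latencies.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# While fio is running, stall I/O for ~31 s (p99_write was set to 310000000 in this run) ...
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3

# ... then restore the 30 us latencies and wait for fio to finish cleanly.
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30
wait "$fio_pid"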
io=8944KiB (9159kB), run=60017-60017msec 00:26:42.933 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60017-60017msec 00:26:42.933 00:26:42.933 Disk stats (read/write): 00:26:42.933 nvme0n1: ios=2285/2560, merge=0/0, ticks=19334/491, in_queue=19825, util=99.80% 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:42.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:42.933 nvmf hotplug test: fio successful as expected 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.933 rmmod nvme_tcp 00:26:42.933 rmmod nvme_fabrics 00:26:42.933 rmmod nvme_keyring 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3599195 ']' 00:26:42.933 18:59:52 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3599195 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3599195 ']' 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3599195 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:42.933 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3599195 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3599195' 00:26:42.934 killing process with pid 3599195 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3599195 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3599195 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:42.934 18:59:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:42.934 18:59:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.934 18:59:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:42.934 18:59:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.934 18:59:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.934 18:59:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.192 18:59:55 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:43.192 00:26:43.192 real 1m8.105s 00:26:43.192 user 4m10.892s 00:26:43.192 sys 0m6.331s 00:26:43.192 18:59:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:43.192 18:59:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.192 ************************************ 00:26:43.192 END TEST nvmf_initiator_timeout 00:26:43.192 ************************************ 00:26:43.192 18:59:55 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:43.192 18:59:55 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:43.192 18:59:55 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:43.192 18:59:55 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.192 18:59:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.096 
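Both suites end by calling nvmftestfini, which unwinds everything nvmftestinit and nvmfappstart set up: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, tear down the target's network namespace, and flush the initiator address. In outline (the netns removal is an assumption about what _remove_spdk_ns does):

# nvmftestfini, in outline.
sync
modprobe -v -r nvme-tcp                 # also drops nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"      # nvmf_tgt started by nvmfappstart
ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1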
18:59:56 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:45.096 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:45.096 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:45.096 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.096 18:59:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:45.355 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:45.355 18:59:56 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:45.355 18:59:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:45.355 18:59:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:45.355 18:59:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:45.355 ************************************ 00:26:45.355 START TEST nvmf_perf_adq 00:26:45.355 ************************************ 00:26:45.355 18:59:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:45.355 * Looking for test storage... 
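The gather_supported_nvmf_pci_devs pass traced above resolves each detected PCI function to its kernel net device through sysfs. A minimal sketch of that lookup, assuming the two E810 ports reported in this run (0000:0a:00.0 and 0000:0a:00.1); other hosts will report different addresses:

    # List the kernel net interfaces backing each PCI function, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob used above.
    # The PCI addresses below come from this run and are only illustrative.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue            # port has no bound netdev
            echo "$pci -> ${netdir##*/}"            # e.g. 0000:0a:00.0 -> cvl_0_0
        done
    done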
00:26:45.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.355 18:59:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:47.258 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:47.258 Found 0000:0a:00.1 (0x8086 - 0x159b) 
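The vendor/device comparisons traced here decide which bucket (e810, x722, mlx) a NIC falls into. A simplified sketch of that classification, using the device IDs listed in the arrays above; the Mellanox IDs are collapsed into a wildcard for brevity, which the real script does not do:

    # Classify a NIC by PCI vendor/device ID, as the e810/x722/mlx arrays above do.
    classify_nic() {
        local vendor=$1 device=$2
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
            0x8086:0x37d2)               echo x722 ;;    # Intel X722
            0x15b3:*)                    echo mlx  ;;    # Mellanox (collapsed here)
            *)                           echo unsupported ;;
        esac
    }
    classify_nic 0x8086 0x159b    # prints "e810", matching the two ports found above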
00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:47.258 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:47.258 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:47.258 18:59:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:47.826 18:59:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:49.725 19:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:54.998 19:00:06 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:54.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:54.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:54.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:54.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.998 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.998 19:00:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:54.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:26:54.998 00:26:54.998 --- 10.0.0.2 ping statistics --- 00:26:54.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.999 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:26:54.999 00:26:54.999 --- 10.0.0.1 ping statistics --- 00:26:54.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.999 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3611292 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3611292 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3611292 ']' 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:54.999 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:54.999 [2024-07-25 19:00:06.694506] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
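The nvmf_tcp_init sequence traced above moves the target-side port into its own network namespace and gives each side an address before nvmf_tgt starts. Condensed to plain iproute2 commands, using the interface names and 10.0.0.x addresses from this run (host-specific):

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator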
00:26:54.999 [2024-07-25 19:00:06.694588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.999 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.999 [2024-07-25 19:00:06.764980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.999 [2024-07-25 19:00:06.859440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.999 [2024-07-25 19:00:06.859505] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.999 [2024-07-25 19:00:06.859522] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.999 [2024-07-25 19:00:06.859535] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.999 [2024-07-25 19:00:06.859548] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.999 [2024-07-25 19:00:06.859637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.999 [2024-07-25 19:00:06.859692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.999 [2024-07-25 19:00:06.859744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.999 [2024-07-25 19:00:06.859746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.257 19:00:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:55.257 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.257 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:55.257 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.257 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.257 [2024-07-25 19:00:07.067679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.257 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.257 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.258 Malloc1 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:55.258 [2024-07-25 19:00:07.118450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3611335 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:55.258 19:00:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:55.516 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.436 19:00:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:57.436 19:00:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.436 19:00:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.436 19:00:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.436 19:00:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:57.436 "tick_rate": 2700000000, 
00:26:57.436 "poll_groups": [ 00:26:57.436 { 00:26:57.436 "name": "nvmf_tgt_poll_group_000", 00:26:57.436 "admin_qpairs": 1, 00:26:57.436 "io_qpairs": 1, 00:26:57.436 "current_admin_qpairs": 1, 00:26:57.436 "current_io_qpairs": 1, 00:26:57.436 "pending_bdev_io": 0, 00:26:57.436 "completed_nvme_io": 19969, 00:26:57.436 "transports": [ 00:26:57.436 { 00:26:57.436 "trtype": "TCP" 00:26:57.436 } 00:26:57.436 ] 00:26:57.436 }, 00:26:57.436 { 00:26:57.436 "name": "nvmf_tgt_poll_group_001", 00:26:57.436 "admin_qpairs": 0, 00:26:57.436 "io_qpairs": 1, 00:26:57.436 "current_admin_qpairs": 0, 00:26:57.436 "current_io_qpairs": 1, 00:26:57.436 "pending_bdev_io": 0, 00:26:57.436 "completed_nvme_io": 20835, 00:26:57.436 "transports": [ 00:26:57.436 { 00:26:57.436 "trtype": "TCP" 00:26:57.436 } 00:26:57.436 ] 00:26:57.436 }, 00:26:57.436 { 00:26:57.436 "name": "nvmf_tgt_poll_group_002", 00:26:57.436 "admin_qpairs": 0, 00:26:57.436 "io_qpairs": 1, 00:26:57.436 "current_admin_qpairs": 0, 00:26:57.437 "current_io_qpairs": 1, 00:26:57.437 "pending_bdev_io": 0, 00:26:57.437 "completed_nvme_io": 20950, 00:26:57.437 "transports": [ 00:26:57.437 { 00:26:57.437 "trtype": "TCP" 00:26:57.437 } 00:26:57.437 ] 00:26:57.437 }, 00:26:57.437 { 00:26:57.437 "name": "nvmf_tgt_poll_group_003", 00:26:57.437 "admin_qpairs": 0, 00:26:57.437 "io_qpairs": 1, 00:26:57.437 "current_admin_qpairs": 0, 00:26:57.437 "current_io_qpairs": 1, 00:26:57.437 "pending_bdev_io": 0, 00:26:57.437 "completed_nvme_io": 19743, 00:26:57.437 "transports": [ 00:26:57.437 { 00:26:57.437 "trtype": "TCP" 00:26:57.437 } 00:26:57.437 ] 00:26:57.437 } 00:26:57.437 ] 00:26:57.437 }' 00:26:57.437 19:00:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:57.437 19:00:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:57.437 19:00:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:57.437 19:00:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:57.437 19:00:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3611335 00:27:05.598 Initializing NVMe Controllers 00:27:05.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:05.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:05.598 Initialization complete. Launching workers. 
00:27:05.598 ======================================================== 00:27:05.598 Latency(us) 00:27:05.598 Device Information : IOPS MiB/s Average min max 00:27:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10427.87 40.73 6136.89 2367.51 9342.24 00:27:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10966.25 42.84 5837.25 4140.81 7536.21 00:27:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10972.95 42.86 5834.46 2935.31 8412.47 00:27:05.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10575.86 41.31 6053.45 3415.81 8877.15 00:27:05.598 ======================================================== 00:27:05.598 Total : 42942.93 167.75 5962.54 2367.51 9342.24 00:27:05.598 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.598 rmmod nvme_tcp 00:27:05.598 rmmod nvme_fabrics 00:27:05.598 rmmod nvme_keyring 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3611292 ']' 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3611292 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3611292 ']' 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3611292 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3611292 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3611292' 00:27:05.598 killing process with pid 3611292 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3611292 00:27:05.598 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3611292 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.856 19:00:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.386 19:00:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:08.386 19:00:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:08.386 19:00:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:08.644 19:00:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:10.544 19:00:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:15.804 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.805 19:00:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:15.805 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:15.805 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:15.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:15.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.805 
19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:15.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:27:15.805 00:27:15.805 --- 10.0.0.2 ping statistics --- 00:27:15.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.805 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:15.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:27:15.805 00:27:15.805 --- 10.0.0.1 ping statistics --- 00:27:15.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.805 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:15.805 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:15.806 net.core.busy_poll = 1 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:15.806 net.core.busy_read = 1 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3614446 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3614446 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3614446 ']' 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:15.806 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.806 [2024-07-25 19:00:27.597777] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:15.806 [2024-07-25 19:00:27.597870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.806 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.806 [2024-07-25 19:00:27.663170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.065 [2024-07-25 19:00:27.751964] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.065 [2024-07-25 19:00:27.752023] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.065 [2024-07-25 19:00:27.752053] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.065 [2024-07-25 19:00:27.752072] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.065 [2024-07-25 19:00:27.752084] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
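The namespace and ADQ plumbing interleaved in the trace above reduces to the sequence sketched below. This is a condensed, commented recap of commands already visible in the log, not additional steps; the interface names (cvl_0_0/cvl_0_1), the namespace cvl_0_0_ns_spdk and the 10.0.0.x addresses are the values used in this particular run, and repository paths are shortened to their repo-relative form.

    # Test topology: put the target-side port in its own namespace, leave the
    # second port in the host namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # ADQ: enable hardware TC offload and busy polling, then add an mqprio root
    # qdisc with two traffic classes and a flower filter that pins NVMe/TCP
    # traffic (10.0.0.2:4420) to TC 1 (the 2@2 queue group) in hardware (skip_sw).
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0

    # The target is then started inside the namespace and left waiting for RPC
    # configuration (the doubled "ip netns exec" prefix in the trace is harmless;
    # both wrappers enter the same namespace).
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc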
00:27:16.065 [2024-07-25 19:00:27.752142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.065 [2024-07-25 19:00:27.752202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.065 [2024-07-25 19:00:27.752271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.065 [2024-07-25 19:00:27.752273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.065 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.323 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.323 19:00:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:16.323 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.323 19:00:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.323 [2024-07-25 19:00:27.993788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.323 Malloc1 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.323 19:00:28 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.323 [2024-07-25 19:00:28.045704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3614543 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:16.323 19:00:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:16.323 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.219 19:00:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:18.219 19:00:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.219 19:00:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.219 19:00:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.219 19:00:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:18.219 "tick_rate": 2700000000, 00:27:18.219 "poll_groups": [ 00:27:18.219 { 00:27:18.219 "name": "nvmf_tgt_poll_group_000", 00:27:18.219 "admin_qpairs": 1, 00:27:18.219 "io_qpairs": 2, 00:27:18.219 "current_admin_qpairs": 1, 00:27:18.219 "current_io_qpairs": 2, 00:27:18.219 "pending_bdev_io": 0, 00:27:18.219 "completed_nvme_io": 27242, 00:27:18.219 "transports": [ 00:27:18.219 { 00:27:18.219 "trtype": "TCP" 00:27:18.219 } 00:27:18.219 ] 00:27:18.219 }, 00:27:18.219 { 00:27:18.219 "name": "nvmf_tgt_poll_group_001", 00:27:18.219 "admin_qpairs": 0, 00:27:18.219 "io_qpairs": 2, 00:27:18.219 "current_admin_qpairs": 0, 00:27:18.219 "current_io_qpairs": 2, 00:27:18.219 "pending_bdev_io": 0, 00:27:18.219 "completed_nvme_io": 24719, 00:27:18.219 "transports": [ 00:27:18.219 { 00:27:18.219 "trtype": "TCP" 00:27:18.219 } 00:27:18.219 ] 00:27:18.219 }, 00:27:18.219 { 00:27:18.219 "name": "nvmf_tgt_poll_group_002", 00:27:18.219 "admin_qpairs": 0, 00:27:18.219 "io_qpairs": 0, 00:27:18.219 "current_admin_qpairs": 0, 00:27:18.219 "current_io_qpairs": 0, 00:27:18.219 "pending_bdev_io": 0, 00:27:18.219 "completed_nvme_io": 0, 
00:27:18.219 "transports": [ 00:27:18.219 { 00:27:18.219 "trtype": "TCP" 00:27:18.219 } 00:27:18.219 ] 00:27:18.219 }, 00:27:18.219 { 00:27:18.219 "name": "nvmf_tgt_poll_group_003", 00:27:18.219 "admin_qpairs": 0, 00:27:18.219 "io_qpairs": 0, 00:27:18.219 "current_admin_qpairs": 0, 00:27:18.219 "current_io_qpairs": 0, 00:27:18.219 "pending_bdev_io": 0, 00:27:18.219 "completed_nvme_io": 0, 00:27:18.219 "transports": [ 00:27:18.219 { 00:27:18.219 "trtype": "TCP" 00:27:18.219 } 00:27:18.219 ] 00:27:18.219 } 00:27:18.219 ] 00:27:18.219 }' 00:27:18.219 19:00:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:18.219 19:00:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:18.477 19:00:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:18.477 19:00:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:18.477 19:00:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3614543 00:27:26.578 Initializing NVMe Controllers 00:27:26.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:26.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:26.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:26.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:26.578 Initialization complete. Launching workers. 00:27:26.578 ======================================================== 00:27:26.578 Latency(us) 00:27:26.578 Device Information : IOPS MiB/s Average min max 00:27:26.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5400.71 21.10 11900.30 1817.02 53755.43 00:27:26.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7260.68 28.36 8815.09 1611.26 53785.32 00:27:26.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7708.17 30.11 8304.14 1716.59 53839.04 00:27:26.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7024.18 27.44 9138.92 1625.82 53476.52 00:27:26.578 ======================================================== 00:27:26.578 Total : 27393.74 107.01 9362.60 1611.26 53839.04 00:27:26.578 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.578 rmmod nvme_tcp 00:27:26.578 rmmod nvme_fabrics 00:27:26.578 rmmod nvme_keyring 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3614446 ']' 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3614446 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3614446 ']' 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3614446 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3614446 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3614446' 00:27:26.578 killing process with pid 3614446 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3614446 00:27:26.578 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3614446 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.837 19:00:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.127 19:00:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:30.127 19:00:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:30.127 00:27:30.127 real 0m44.586s 00:27:30.127 user 2m38.403s 00:27:30.127 sys 0m9.591s 00:27:30.127 19:00:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:30.127 19:00:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.127 ************************************ 00:27:30.127 END TEST nvmf_perf_adq 00:27:30.127 ************************************ 00:27:30.127 19:00:41 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:30.127 19:00:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:30.127 19:00:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:30.127 19:00:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:30.127 ************************************ 00:27:30.127 START TEST nvmf_shutdown 00:27:30.127 ************************************ 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:30.127 * Looking for test storage... 
00:27:30.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.127 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:30.128 ************************************ 00:27:30.128 START TEST nvmf_shutdown_tc1 00:27:30.128 ************************************ 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:30.128 19:00:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.128 19:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:32.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:32.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.031 19:00:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.031 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:32.032 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:32.032 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:27:32.032 00:27:32.032 --- 10.0.0.2 ping statistics --- 00:27:32.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.032 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:27:32.032 00:27:32.032 --- 10.0.0.1 ping statistics --- 00:27:32.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.032 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3617760 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3617760 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3617760 ']' 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.032 19:00:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.032 [2024-07-25 19:00:43.765703] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:32.032 [2024-07-25 19:00:43.765783] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.032 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.032 [2024-07-25 19:00:43.834379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.326 [2024-07-25 19:00:43.934837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.326 [2024-07-25 19:00:43.934888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.326 [2024-07-25 19:00:43.934905] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.326 [2024-07-25 19:00:43.934918] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.326 [2024-07-25 19:00:43.934930] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.326 [2024-07-25 19:00:43.935046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.326 [2024-07-25 19:00:43.935154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.326 [2024-07-25 19:00:43.935179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:32.326 [2024-07-25 19:00:43.935182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.326 [2024-07-25 19:00:44.085614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.326 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.326 Malloc1 00:27:32.326 [2024-07-25 19:00:44.160551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.583 Malloc2 00:27:32.583 Malloc3 00:27:32.583 Malloc4 00:27:32.583 Malloc5 00:27:32.583 Malloc6 00:27:32.583 Malloc7 00:27:32.842 Malloc8 00:27:32.842 Malloc9 00:27:32.842 Malloc10 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3617942 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3617942 /var/tmp/bdevperf.sock 00:27:32.842 19:00:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3617942 ']' 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.842 { 00:27:32.842 "params": { 00:27:32.842 "name": "Nvme$subsystem", 00:27:32.842 "trtype": "$TEST_TRANSPORT", 00:27:32.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.842 "adrfam": "ipv4", 00:27:32.842 "trsvcid": "$NVMF_PORT", 00:27:32.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.842 "hdgst": ${hdgst:-false}, 00:27:32.842 "ddgst": ${ddgst:-false} 00:27:32.842 }, 00:27:32.842 "method": "bdev_nvme_attach_controller" 00:27:32.842 } 00:27:32.842 EOF 00:27:32.842 )") 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.842 { 00:27:32.842 "params": { 00:27:32.842 "name": "Nvme$subsystem", 00:27:32.842 "trtype": "$TEST_TRANSPORT", 00:27:32.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.842 "adrfam": "ipv4", 00:27:32.842 "trsvcid": "$NVMF_PORT", 00:27:32.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.842 "hdgst": ${hdgst:-false}, 00:27:32.842 "ddgst": ${ddgst:-false} 00:27:32.842 }, 00:27:32.842 "method": "bdev_nvme_attach_controller" 00:27:32.842 } 00:27:32.842 EOF 00:27:32.842 )") 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.842 { 00:27:32.842 "params": { 00:27:32.842 "name": "Nvme$subsystem", 00:27:32.842 "trtype": 
"$TEST_TRANSPORT", 00:27:32.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.842 "adrfam": "ipv4", 00:27:32.842 "trsvcid": "$NVMF_PORT", 00:27:32.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.842 "hdgst": ${hdgst:-false}, 00:27:32.842 "ddgst": ${ddgst:-false} 00:27:32.842 }, 00:27:32.842 "method": "bdev_nvme_attach_controller" 00:27:32.842 } 00:27:32.842 EOF 00:27:32.842 )") 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.842 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.842 { 00:27:32.842 "params": { 00:27:32.842 "name": "Nvme$subsystem", 00:27:32.842 "trtype": "$TEST_TRANSPORT", 00:27:32.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.842 "adrfam": "ipv4", 00:27:32.842 "trsvcid": "$NVMF_PORT", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.843 "hdgst": ${hdgst:-false}, 00:27:32.843 "ddgst": ${ddgst:-false} 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 } 00:27:32.843 EOF 00:27:32.843 )") 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.843 { 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme$subsystem", 00:27:32.843 "trtype": "$TEST_TRANSPORT", 00:27:32.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "$NVMF_PORT", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.843 "hdgst": ${hdgst:-false}, 00:27:32.843 "ddgst": ${ddgst:-false} 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 } 00:27:32.843 EOF 00:27:32.843 )") 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.843 { 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme$subsystem", 00:27:32.843 "trtype": "$TEST_TRANSPORT", 00:27:32.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "$NVMF_PORT", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.843 "hdgst": ${hdgst:-false}, 00:27:32.843 "ddgst": ${ddgst:-false} 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 } 00:27:32.843 EOF 00:27:32.843 )") 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.843 { 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme$subsystem", 00:27:32.843 "trtype": "$TEST_TRANSPORT", 
00:27:32.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "$NVMF_PORT", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.843 "hdgst": ${hdgst:-false}, 00:27:32.843 "ddgst": ${ddgst:-false} 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 } 00:27:32.843 EOF 00:27:32.843 )") 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.843 { 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme$subsystem", 00:27:32.843 "trtype": "$TEST_TRANSPORT", 00:27:32.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "$NVMF_PORT", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.843 "hdgst": ${hdgst:-false}, 00:27:32.843 "ddgst": ${ddgst:-false} 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 } 00:27:32.843 EOF 00:27:32.843 )") 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.843 { 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme$subsystem", 00:27:32.843 "trtype": "$TEST_TRANSPORT", 00:27:32.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "$NVMF_PORT", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.843 "hdgst": ${hdgst:-false}, 00:27:32.843 "ddgst": ${ddgst:-false} 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 } 00:27:32.843 EOF 00:27:32.843 )") 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.843 { 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme$subsystem", 00:27:32.843 "trtype": "$TEST_TRANSPORT", 00:27:32.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "$NVMF_PORT", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.843 "hdgst": ${hdgst:-false}, 00:27:32.843 "ddgst": ${ddgst:-false} 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 } 00:27:32.843 EOF 00:27:32.843 )") 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:32.843 19:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme1", 00:27:32.843 "trtype": "tcp", 00:27:32.843 "traddr": "10.0.0.2", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "4420", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:32.843 "hdgst": false, 00:27:32.843 "ddgst": false 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 },{ 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme2", 00:27:32.843 "trtype": "tcp", 00:27:32.843 "traddr": "10.0.0.2", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "4420", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:32.843 "hdgst": false, 00:27:32.843 "ddgst": false 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 },{ 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme3", 00:27:32.843 "trtype": "tcp", 00:27:32.843 "traddr": "10.0.0.2", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "4420", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:32.843 "hdgst": false, 00:27:32.843 "ddgst": false 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 },{ 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme4", 00:27:32.843 "trtype": "tcp", 00:27:32.843 "traddr": "10.0.0.2", 00:27:32.843 "adrfam": "ipv4", 00:27:32.843 "trsvcid": "4420", 00:27:32.843 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:32.843 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:32.843 "hdgst": false, 00:27:32.843 "ddgst": false 00:27:32.843 }, 00:27:32.843 "method": "bdev_nvme_attach_controller" 00:27:32.843 },{ 00:27:32.843 "params": { 00:27:32.843 "name": "Nvme5", 00:27:32.843 "trtype": "tcp", 00:27:32.843 "traddr": "10.0.0.2", 00:27:32.844 "adrfam": "ipv4", 00:27:32.844 "trsvcid": "4420", 00:27:32.844 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:32.844 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:32.844 "hdgst": false, 00:27:32.844 "ddgst": false 00:27:32.844 }, 00:27:32.844 "method": "bdev_nvme_attach_controller" 00:27:32.844 },{ 00:27:32.844 "params": { 00:27:32.844 "name": "Nvme6", 00:27:32.844 "trtype": "tcp", 00:27:32.844 "traddr": "10.0.0.2", 00:27:32.844 "adrfam": "ipv4", 00:27:32.844 "trsvcid": "4420", 00:27:32.844 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:32.844 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:32.844 "hdgst": false, 00:27:32.844 "ddgst": false 00:27:32.844 }, 00:27:32.844 "method": "bdev_nvme_attach_controller" 00:27:32.844 },{ 00:27:32.844 "params": { 00:27:32.844 "name": "Nvme7", 00:27:32.844 "trtype": "tcp", 00:27:32.844 "traddr": "10.0.0.2", 00:27:32.844 "adrfam": "ipv4", 00:27:32.844 "trsvcid": "4420", 00:27:32.844 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:32.844 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:32.844 "hdgst": false, 00:27:32.844 "ddgst": false 00:27:32.844 }, 00:27:32.844 "method": "bdev_nvme_attach_controller" 00:27:32.844 },{ 00:27:32.844 "params": { 00:27:32.844 "name": "Nvme8", 00:27:32.844 "trtype": "tcp", 00:27:32.844 "traddr": "10.0.0.2", 00:27:32.844 "adrfam": "ipv4", 00:27:32.844 "trsvcid": "4420", 00:27:32.844 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:32.844 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:32.844 "hdgst": false, 
00:27:32.844 "ddgst": false 00:27:32.844 }, 00:27:32.844 "method": "bdev_nvme_attach_controller" 00:27:32.844 },{ 00:27:32.844 "params": { 00:27:32.844 "name": "Nvme9", 00:27:32.844 "trtype": "tcp", 00:27:32.844 "traddr": "10.0.0.2", 00:27:32.844 "adrfam": "ipv4", 00:27:32.844 "trsvcid": "4420", 00:27:32.844 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:32.844 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:32.844 "hdgst": false, 00:27:32.844 "ddgst": false 00:27:32.844 }, 00:27:32.844 "method": "bdev_nvme_attach_controller" 00:27:32.844 },{ 00:27:32.844 "params": { 00:27:32.844 "name": "Nvme10", 00:27:32.844 "trtype": "tcp", 00:27:32.844 "traddr": "10.0.0.2", 00:27:32.844 "adrfam": "ipv4", 00:27:32.844 "trsvcid": "4420", 00:27:32.844 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:32.844 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:32.844 "hdgst": false, 00:27:32.844 "ddgst": false 00:27:32.844 }, 00:27:32.844 "method": "bdev_nvme_attach_controller" 00:27:32.844 }' 00:27:32.844 [2024-07-25 19:00:44.653601] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:32.844 [2024-07-25 19:00:44.653675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:32.844 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.844 [2024-07-25 19:00:44.716779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.102 [2024-07-25 19:00:44.803815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3617942 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:35.006 19:00:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:35.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3617942 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3617760 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.944 { 00:27:35.944 "params": { 00:27:35.944 "name": "Nvme$subsystem", 00:27:35.944 "trtype": "$TEST_TRANSPORT", 00:27:35.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.944 "adrfam": "ipv4", 00:27:35.944 "trsvcid": "$NVMF_PORT", 00:27:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.944 "hdgst": ${hdgst:-false}, 00:27:35.944 "ddgst": ${ddgst:-false} 00:27:35.944 }, 00:27:35.944 "method": "bdev_nvme_attach_controller" 00:27:35.944 } 00:27:35.944 EOF 00:27:35.944 )") 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.944 { 00:27:35.944 "params": { 00:27:35.944 "name": "Nvme$subsystem", 00:27:35.944 "trtype": "$TEST_TRANSPORT", 00:27:35.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.944 "adrfam": "ipv4", 00:27:35.944 "trsvcid": "$NVMF_PORT", 00:27:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.944 "hdgst": ${hdgst:-false}, 00:27:35.944 "ddgst": ${ddgst:-false} 00:27:35.944 }, 00:27:35.944 "method": "bdev_nvme_attach_controller" 00:27:35.944 } 00:27:35.944 EOF 00:27:35.944 )") 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.944 { 00:27:35.944 "params": { 00:27:35.944 "name": "Nvme$subsystem", 00:27:35.944 "trtype": "$TEST_TRANSPORT", 00:27:35.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.944 "adrfam": "ipv4", 00:27:35.944 "trsvcid": "$NVMF_PORT", 00:27:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.944 "hdgst": ${hdgst:-false}, 00:27:35.944 "ddgst": ${ddgst:-false} 00:27:35.944 }, 00:27:35.944 "method": "bdev_nvme_attach_controller" 00:27:35.944 } 00:27:35.944 EOF 00:27:35.944 )") 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.944 { 00:27:35.944 "params": { 00:27:35.944 "name": "Nvme$subsystem", 00:27:35.944 "trtype": "$TEST_TRANSPORT", 00:27:35.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.944 "adrfam": "ipv4", 00:27:35.944 "trsvcid": "$NVMF_PORT", 00:27:35.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.944 "hdgst": ${hdgst:-false}, 00:27:35.944 "ddgst": ${ddgst:-false} 00:27:35.944 }, 00:27:35.944 "method": "bdev_nvme_attach_controller" 00:27:35.944 } 00:27:35.944 EOF 00:27:35.944 )") 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:35.944 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.945 { 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme$subsystem", 00:27:35.945 "trtype": "$TEST_TRANSPORT", 00:27:35.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "$NVMF_PORT", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.945 "hdgst": ${hdgst:-false}, 00:27:35.945 "ddgst": ${ddgst:-false} 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 } 00:27:35.945 EOF 00:27:35.945 )") 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.945 { 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme$subsystem", 00:27:35.945 "trtype": "$TEST_TRANSPORT", 00:27:35.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "$NVMF_PORT", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.945 "hdgst": ${hdgst:-false}, 00:27:35.945 "ddgst": ${ddgst:-false} 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 } 00:27:35.945 EOF 00:27:35.945 )") 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.945 { 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme$subsystem", 00:27:35.945 "trtype": "$TEST_TRANSPORT", 00:27:35.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "$NVMF_PORT", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.945 "hdgst": ${hdgst:-false}, 00:27:35.945 "ddgst": ${ddgst:-false} 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 } 00:27:35.945 EOF 00:27:35.945 )") 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.945 { 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme$subsystem", 00:27:35.945 "trtype": "$TEST_TRANSPORT", 00:27:35.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "$NVMF_PORT", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.945 "hdgst": ${hdgst:-false}, 00:27:35.945 "ddgst": ${ddgst:-false} 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 } 00:27:35.945 EOF 00:27:35.945 )") 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.945 { 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme$subsystem", 00:27:35.945 "trtype": "$TEST_TRANSPORT", 00:27:35.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "$NVMF_PORT", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.945 "hdgst": ${hdgst:-false}, 00:27:35.945 "ddgst": ${ddgst:-false} 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 } 00:27:35.945 EOF 00:27:35.945 )") 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:35.945 { 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme$subsystem", 00:27:35.945 "trtype": "$TEST_TRANSPORT", 00:27:35.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "$NVMF_PORT", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.945 "hdgst": ${hdgst:-false}, 00:27:35.945 "ddgst": ${ddgst:-false} 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 } 00:27:35.945 EOF 00:27:35.945 )") 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
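As with the earlier bdev_svc launch, the generated JSON never touches disk: shutdown.sh hands it to the consumer through process substitution, which is why the trace shows --json /dev/fd/62 (the anonymous descriptor bash allocates for <(...)). A hedged sketch of the same call shape, reusing the generator sketched earlier and the workload parameters visible in the trace; the real test drives this through its own wrappers and workspace paths:

./build/examples/bdevperf \
    --json <(gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1    # queue depth 64, 64 KiB I/O, verify workload, 1 s run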
00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:35.945 19:00:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme1", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.945 "hdgst": false, 00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 },{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme2", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:35.945 "hdgst": false, 00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 },{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme3", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:35.945 "hdgst": false, 00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 },{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme4", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:35.945 "hdgst": false, 00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 },{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme5", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:35.945 "hdgst": false, 00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 },{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme6", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:35.945 "hdgst": false, 00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 },{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme7", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:35.945 "hdgst": false, 00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.945 "method": "bdev_nvme_attach_controller" 00:27:35.945 },{ 00:27:35.945 "params": { 00:27:35.945 "name": "Nvme8", 00:27:35.945 "trtype": "tcp", 00:27:35.945 "traddr": "10.0.0.2", 00:27:35.945 "adrfam": "ipv4", 00:27:35.945 "trsvcid": "4420", 00:27:35.945 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:35.945 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:35.945 "hdgst": false, 
00:27:35.945 "ddgst": false 00:27:35.945 }, 00:27:35.946 "method": "bdev_nvme_attach_controller" 00:27:35.946 },{ 00:27:35.946 "params": { 00:27:35.946 "name": "Nvme9", 00:27:35.946 "trtype": "tcp", 00:27:35.946 "traddr": "10.0.0.2", 00:27:35.946 "adrfam": "ipv4", 00:27:35.946 "trsvcid": "4420", 00:27:35.946 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:35.946 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:35.946 "hdgst": false, 00:27:35.946 "ddgst": false 00:27:35.946 }, 00:27:35.946 "method": "bdev_nvme_attach_controller" 00:27:35.946 },{ 00:27:35.946 "params": { 00:27:35.946 "name": "Nvme10", 00:27:35.946 "trtype": "tcp", 00:27:35.946 "traddr": "10.0.0.2", 00:27:35.946 "adrfam": "ipv4", 00:27:35.946 "trsvcid": "4420", 00:27:35.946 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:35.946 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:35.946 "hdgst": false, 00:27:35.946 "ddgst": false 00:27:35.946 }, 00:27:35.946 "method": "bdev_nvme_attach_controller" 00:27:35.946 }' 00:27:35.946 [2024-07-25 19:00:47.653875] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:35.946 [2024-07-25 19:00:47.653961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618360 ] 00:27:35.946 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.946 [2024-07-25 19:00:47.721574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.946 [2024-07-25 19:00:47.811830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.850 Running I/O for 1 seconds... 00:27:38.782 00:27:38.782 Latency(us) 00:27:38.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.782 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 00:27:38.782 Nvme1n1 : 1.10 260.01 16.25 0.00 0.00 234293.15 24758.04 240784.12 00:27:38.782 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 00:27:38.782 Nvme2n1 : 1.14 223.68 13.98 0.00 0.00 278788.74 21942.42 257872.02 00:27:38.782 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 00:27:38.782 Nvme3n1 : 1.11 235.61 14.73 0.00 0.00 259246.87 5121.52 257872.02 00:27:38.782 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 00:27:38.782 Nvme4n1 : 1.16 275.54 17.22 0.00 0.00 218387.34 17087.91 253211.69 00:27:38.782 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 00:27:38.782 Nvme5n1 : 1.16 221.00 13.81 0.00 0.00 268242.30 23495.87 265639.25 00:27:38.782 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 00:27:38.782 Nvme6n1 : 1.15 222.07 13.88 0.00 0.00 262626.80 25243.50 239230.67 00:27:38.782 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 00:27:38.782 Nvme7n1 : 1.13 225.79 14.11 0.00 0.00 253490.06 55147.33 215928.98 00:27:38.782 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.782 Verification LBA range: start 0x0 length 0x400 
00:27:38.782 Nvme8n1 : 1.14 225.29 14.08 0.00 0.00 249669.78 37282.70 233016.89 00:27:38.782 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.783 Verification LBA range: start 0x0 length 0x400 00:27:38.783 Nvme9n1 : 1.17 275.29 17.21 0.00 0.00 201549.18 1577.72 248551.35 00:27:38.783 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:38.783 Verification LBA range: start 0x0 length 0x400 00:27:38.783 Nvme10n1 : 1.18 227.12 14.19 0.00 0.00 238912.47 2694.26 293601.28 00:27:38.783 =================================================================================================================== 00:27:38.783 Total : 2391.39 149.46 0.00 0.00 244620.59 1577.72 293601.28 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.042 rmmod nvme_tcp 00:27:39.042 rmmod nvme_fabrics 00:27:39.042 rmmod nvme_keyring 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3617760 ']' 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3617760 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3617760 ']' 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3617760 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3617760 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:39.042 19:00:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3617760' 00:27:39.042 killing process with pid 3617760 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3617760 00:27:39.042 19:00:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3617760 00:27:39.610 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.610 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.610 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.610 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.611 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.611 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.611 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.611 19:00:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:42.146 00:27:42.146 real 0m11.732s 00:27:42.146 user 0m34.274s 00:27:42.146 sys 0m3.154s 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.146 ************************************ 00:27:42.146 END TEST nvmf_shutdown_tc1 00:27:42.146 ************************************ 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:42.146 ************************************ 00:27:42.146 START TEST nvmf_shutdown_tc2 00:27:42.146 ************************************ 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:42.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:42.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:42.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.146 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:42.147 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:42.147 19:00:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:42.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:42.147 00:27:42.147 --- 10.0.0.2 ping statistics --- 00:27:42.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.147 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:27:42.147 00:27:42.147 --- 10.0.0.1 ping statistics --- 00:27:42.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.147 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3619130 
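The nvmftestinit sequence above splits the two detected e810 ports between host and target: cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1), cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), and the two pings confirm reachability in both directions before any NVMe/TCP traffic is attempted. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace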
00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3619130 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3619130 ']' 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:42.147 19:00:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.147 [2024-07-25 19:00:53.757016] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:42.147 [2024-07-25 19:00:53.757124] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.147 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.147 [2024-07-25 19:00:53.820083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.147 [2024-07-25 19:00:53.904700] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.147 [2024-07-25 19:00:53.904750] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.147 [2024-07-25 19:00:53.904778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.147 [2024-07-25 19:00:53.904790] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.147 [2024-07-25 19:00:53.904800] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
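nvmfappstart then runs the target inside that namespace with all tracepoint groups enabled (-e 0xFFFF) and a four-core mask (-m 0x1E, i.e. cores 1-4, matching the reactor messages below), and blocks until the RPC socket answers. A stripped-down equivalent, assuming the default /var/tmp/spdk.sock and SPDK's stock rpc.py rather than the test's waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# framework_wait_init returns once subsystem initialization has finished,
# i.e. the target is ready to accept further RPCs.
./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init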
00:27:42.147 [2024-07-25 19:00:53.904890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.147 [2024-07-25 19:00:53.904954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.147 [2024-07-25 19:00:53.905020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:42.147 [2024-07-25 19:00:53.905022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.408 [2024-07-25 19:00:54.060928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.408 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.408 Malloc1 00:27:42.408 [2024-07-25 19:00:54.140916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.408 Malloc2 00:27:42.408 Malloc3 00:27:42.408 Malloc4 00:27:42.667 Malloc5 00:27:42.667 Malloc6 00:27:42.667 Malloc7 00:27:42.667 Malloc8 00:27:42.667 Malloc9 00:27:42.927 Malloc10 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3619309 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3619309 /var/tmp/bdevperf.sock 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3619309 ']' 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
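The Malloc1 through Malloc10 lines above are the per-subsystem backing bdevs: create_subsystems batches one malloc bdev, one cnode subsystem, one namespace and one TCP listener per index into rpcs.txt and replays them, which is what produces the "Listening on 10.0.0.2 port 4420" notice. A hedged per-iteration sketch with SPDK's rpc.py (standard RPC names; the 64 MiB / 512-byte malloc geometry and the serial numbers are assumed defaults, not values shown in this trace):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                    # as created in the trace above
for i in $(seq 1 10); do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512                  # 64 MiB bdev, 512 B blocks (assumed)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done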
00:27:42.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.927 "params": { 00:27:42.927 "name": "Nvme$subsystem", 00:27:42.927 "trtype": "$TEST_TRANSPORT", 00:27:42.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.927 "adrfam": "ipv4", 00:27:42.927 "trsvcid": "$NVMF_PORT", 00:27:42.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.927 "hdgst": ${hdgst:-false}, 00:27:42.927 "ddgst": ${ddgst:-false} 00:27:42.927 }, 00:27:42.927 "method": "bdev_nvme_attach_controller" 00:27:42.927 } 00:27:42.927 EOF 00:27:42.927 )") 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.927 "params": { 00:27:42.927 "name": "Nvme$subsystem", 00:27:42.927 "trtype": "$TEST_TRANSPORT", 00:27:42.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.927 "adrfam": "ipv4", 00:27:42.927 "trsvcid": "$NVMF_PORT", 00:27:42.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.927 "hdgst": ${hdgst:-false}, 00:27:42.927 "ddgst": ${ddgst:-false} 00:27:42.927 }, 00:27:42.927 "method": "bdev_nvme_attach_controller" 00:27:42.927 } 00:27:42.927 EOF 00:27:42.927 )") 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.927 "params": { 00:27:42.927 "name": "Nvme$subsystem", 00:27:42.927 "trtype": "$TEST_TRANSPORT", 00:27:42.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.927 "adrfam": "ipv4", 00:27:42.927 "trsvcid": "$NVMF_PORT", 00:27:42.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.927 "hdgst": ${hdgst:-false}, 00:27:42.927 "ddgst": ${ddgst:-false} 00:27:42.927 }, 00:27:42.927 "method": "bdev_nvme_attach_controller" 00:27:42.927 } 00:27:42.927 EOF 00:27:42.927 )") 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.927 "params": { 00:27:42.927 "name": "Nvme$subsystem", 00:27:42.927 "trtype": "$TEST_TRANSPORT", 00:27:42.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.927 "adrfam": "ipv4", 00:27:42.927 "trsvcid": "$NVMF_PORT", 
00:27:42.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.927 "hdgst": ${hdgst:-false}, 00:27:42.927 "ddgst": ${ddgst:-false} 00:27:42.927 }, 00:27:42.927 "method": "bdev_nvme_attach_controller" 00:27:42.927 } 00:27:42.927 EOF 00:27:42.927 )") 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.927 "params": { 00:27:42.927 "name": "Nvme$subsystem", 00:27:42.927 "trtype": "$TEST_TRANSPORT", 00:27:42.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.927 "adrfam": "ipv4", 00:27:42.927 "trsvcid": "$NVMF_PORT", 00:27:42.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.927 "hdgst": ${hdgst:-false}, 00:27:42.927 "ddgst": ${ddgst:-false} 00:27:42.927 }, 00:27:42.927 "method": "bdev_nvme_attach_controller" 00:27:42.927 } 00:27:42.927 EOF 00:27:42.927 )") 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.927 "params": { 00:27:42.927 "name": "Nvme$subsystem", 00:27:42.927 "trtype": "$TEST_TRANSPORT", 00:27:42.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.927 "adrfam": "ipv4", 00:27:42.927 "trsvcid": "$NVMF_PORT", 00:27:42.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.927 "hdgst": ${hdgst:-false}, 00:27:42.927 "ddgst": ${ddgst:-false} 00:27:42.927 }, 00:27:42.927 "method": "bdev_nvme_attach_controller" 00:27:42.927 } 00:27:42.927 EOF 00:27:42.927 )") 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.927 "params": { 00:27:42.927 "name": "Nvme$subsystem", 00:27:42.927 "trtype": "$TEST_TRANSPORT", 00:27:42.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.927 "adrfam": "ipv4", 00:27:42.927 "trsvcid": "$NVMF_PORT", 00:27:42.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.927 "hdgst": ${hdgst:-false}, 00:27:42.927 "ddgst": ${ddgst:-false} 00:27:42.927 }, 00:27:42.927 "method": "bdev_nvme_attach_controller" 00:27:42.927 } 00:27:42.927 EOF 00:27:42.927 )") 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.927 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.927 { 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme$subsystem", 00:27:42.928 "trtype": "$TEST_TRANSPORT", 00:27:42.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "$NVMF_PORT", 00:27:42.928 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.928 "hdgst": ${hdgst:-false}, 00:27:42.928 "ddgst": ${ddgst:-false} 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 } 00:27:42.928 EOF 00:27:42.928 )") 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.928 { 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme$subsystem", 00:27:42.928 "trtype": "$TEST_TRANSPORT", 00:27:42.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "$NVMF_PORT", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.928 "hdgst": ${hdgst:-false}, 00:27:42.928 "ddgst": ${ddgst:-false} 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 } 00:27:42.928 EOF 00:27:42.928 )") 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.928 { 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme$subsystem", 00:27:42.928 "trtype": "$TEST_TRANSPORT", 00:27:42.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "$NVMF_PORT", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.928 "hdgst": ${hdgst:-false}, 00:27:42.928 "ddgst": ${ddgst:-false} 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 } 00:27:42.928 EOF 00:27:42.928 )") 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:42.928 19:00:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme1", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme2", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme3", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme4", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme5", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme6", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme7", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme8", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:42.928 "hdgst": false, 
00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme9", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 },{ 00:27:42.928 "params": { 00:27:42.928 "name": "Nvme10", 00:27:42.928 "trtype": "tcp", 00:27:42.928 "traddr": "10.0.0.2", 00:27:42.928 "adrfam": "ipv4", 00:27:42.928 "trsvcid": "4420", 00:27:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:42.928 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:42.928 "hdgst": false, 00:27:42.928 "ddgst": false 00:27:42.928 }, 00:27:42.928 "method": "bdev_nvme_attach_controller" 00:27:42.928 }' 00:27:42.928 [2024-07-25 19:00:54.663713] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:42.928 [2024-07-25 19:00:54.663785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619309 ] 00:27:42.928 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.928 [2024-07-25 19:00:54.726659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.187 [2024-07-25 19:00:54.813252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.562 Running I/O for 10 seconds... 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3619309 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3619309 ']' 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3619309 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:44.820 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3619309 00:27:45.079 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:45.079 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:45.079 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3619309' 00:27:45.079 killing process with pid 3619309 00:27:45.079 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3619309 00:27:45.079 19:00:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3619309 00:27:45.079 Received shutdown signal, test time was about 0.774408 seconds 00:27:45.079 00:27:45.079 Latency(us) 00:27:45.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.079 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme1n1 : 0.71 269.17 16.82 0.00 0.00 233926.35 17379.18 250104.79 00:27:45.079 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme2n1 : 0.72 265.41 16.59 0.00 0.00 231439.42 21456.97 245444.46 00:27:45.079 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme3n1 : 0.71 268.58 16.79 0.00 0.00 221876.27 24855.13 248551.35 00:27:45.079 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme4n1 : 0.73 263.81 16.49 0.00 0.00 220613.85 23398.78 254765.13 00:27:45.079 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme5n1 : 0.73 262.61 16.41 0.00 0.00 215765.14 21262.79 253211.69 00:27:45.079 Job: Nvme6n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme6n1 : 0.69 195.12 12.19 0.00 0.00 275503.81 4320.52 215928.98 00:27:45.079 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme7n1 : 0.77 248.20 15.51 0.00 0.00 205008.59 21165.70 228356.55 00:27:45.079 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme8n1 : 0.70 183.20 11.45 0.00 0.00 280400.02 41943.04 237677.23 00:27:45.079 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme9n1 : 0.69 184.31 11.52 0.00 0.00 267685.74 28350.39 242337.56 00:27:45.079 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.079 Verification LBA range: start 0x0 length 0x400 00:27:45.079 Nvme10n1 : 0.71 181.13 11.32 0.00 0.00 266501.69 42525.58 271853.04 00:27:45.079 =================================================================================================================== 00:27:45.080 Total : 2321.54 145.10 0.00 0.00 237317.23 4320.52 271853.04 00:27:45.339 19:00:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3619130 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.274 rmmod nvme_tcp 00:27:46.274 rmmod nvme_fabrics 00:27:46.274 rmmod nvme_keyring 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3619130 ']' 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3619130 00:27:46.274 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # 
'[' -z 3619130 ']' 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3619130 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3619130 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3619130' 00:27:46.275 killing process with pid 3619130 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3619130 00:27:46.275 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3619130 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.843 19:00:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:49.381 00:27:49.381 real 0m7.168s 00:27:49.381 user 0m20.515s 00:27:49.381 sys 0m1.452s 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.381 ************************************ 00:27:49.381 END TEST nvmf_shutdown_tc2 00:27:49.381 ************************************ 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:49.381 ************************************ 00:27:49.381 START TEST nvmf_shutdown_tc3 00:27:49.381 ************************************ 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # 
nvmftestinit 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.381 19:01:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:49.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:49.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:49.381 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:49.382 19:01:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:49.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:49.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.382 19:01:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:49.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:27:49.382 00:27:49.382 --- 10.0.0.2 ping statistics --- 00:27:49.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.382 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:49.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:27:49.382 00:27:49.382 --- 10.0.0.1 ping statistics --- 00:27:49.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.382 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3620114 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3620114 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3620114 ']' 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:49.382 19:01:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.382 [2024-07-25 19:01:00.971645] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
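Note on the trace above: the nvmftestinit portion (nvmf/common.sh@242-268) pins the target-side port into its own network namespace, addresses both ends of the 10.0.0.0/24 test subnet, opens TCP/4420, and proves reachability with one ping in each direction before the target is started inside that namespace. Condensed from the commands visible in the trace; cvl_0_0/cvl_0_1 are simply the interface names this rig enumerated:

# Target NIC goes into a private namespace; the initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Address the two ends of the test subnet.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# Bring the links (and the namespaced loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in on the initiator side, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target itself is then launched inside the namespace (condensed from nvmf/common.sh@480-481):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &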
00:27:49.382 [2024-07-25 19:01:00.971733] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.382 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.382 [2024-07-25 19:01:01.039681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.382 [2024-07-25 19:01:01.131479] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.382 [2024-07-25 19:01:01.131545] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.382 [2024-07-25 19:01:01.131561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.382 [2024-07-25 19:01:01.131574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.382 [2024-07-25 19:01:01.131595] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.382 [2024-07-25 19:01:01.131681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.382 [2024-07-25 19:01:01.131795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.382 [2024-07-25 19:01:01.131861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:49.382 [2024-07-25 19:01:01.131864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.382 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:49.382 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:49.382 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:49.382 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.382 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 [2024-07-25 19:01:01.282027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.641 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 Malloc1 00:27:49.641 [2024-07-25 19:01:01.371422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.641 Malloc2 00:27:49.641 Malloc3 00:27:49.641 Malloc4 00:27:49.899 Malloc5 00:27:49.899 Malloc6 00:27:49.899 Malloc7 00:27:49.899 Malloc8 00:27:49.899 Malloc9 00:27:49.899 Malloc10 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3620268 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3620268 /var/tmp/bdevperf.sock 00:27:50.158 
19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3620268 ']' 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:50.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": "Nvme$subsystem", 00:27:50.158 "trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.158 "hdgst": ${hdgst:-false}, 00:27:50.158 "ddgst": ${ddgst:-false} 00:27:50.158 }, 00:27:50.158 "method": "bdev_nvme_attach_controller" 00:27:50.158 } 00:27:50.158 EOF 00:27:50.158 )") 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": "Nvme$subsystem", 00:27:50.158 "trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.158 "hdgst": ${hdgst:-false}, 00:27:50.158 "ddgst": ${ddgst:-false} 00:27:50.158 }, 00:27:50.158 "method": "bdev_nvme_attach_controller" 00:27:50.158 } 00:27:50.158 EOF 00:27:50.158 )") 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": 
"Nvme$subsystem", 00:27:50.158 "trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.158 "hdgst": ${hdgst:-false}, 00:27:50.158 "ddgst": ${ddgst:-false} 00:27:50.158 }, 00:27:50.158 "method": "bdev_nvme_attach_controller" 00:27:50.158 } 00:27:50.158 EOF 00:27:50.158 )") 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": "Nvme$subsystem", 00:27:50.158 "trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.158 "hdgst": ${hdgst:-false}, 00:27:50.158 "ddgst": ${ddgst:-false} 00:27:50.158 }, 00:27:50.158 "method": "bdev_nvme_attach_controller" 00:27:50.158 } 00:27:50.158 EOF 00:27:50.158 )") 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": "Nvme$subsystem", 00:27:50.158 "trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.158 "hdgst": ${hdgst:-false}, 00:27:50.158 "ddgst": ${ddgst:-false} 00:27:50.158 }, 00:27:50.158 "method": "bdev_nvme_attach_controller" 00:27:50.158 } 00:27:50.158 EOF 00:27:50.158 )") 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": "Nvme$subsystem", 00:27:50.158 "trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.158 "hdgst": ${hdgst:-false}, 00:27:50.158 "ddgst": ${ddgst:-false} 00:27:50.158 }, 00:27:50.158 "method": "bdev_nvme_attach_controller" 00:27:50.158 } 00:27:50.158 EOF 00:27:50.158 )") 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": "Nvme$subsystem", 00:27:50.158 
"trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.158 "hdgst": ${hdgst:-false}, 00:27:50.158 "ddgst": ${ddgst:-false} 00:27:50.158 }, 00:27:50.158 "method": "bdev_nvme_attach_controller" 00:27:50.158 } 00:27:50.158 EOF 00:27:50.158 )") 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.158 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.158 { 00:27:50.158 "params": { 00:27:50.158 "name": "Nvme$subsystem", 00:27:50.158 "trtype": "$TEST_TRANSPORT", 00:27:50.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.158 "adrfam": "ipv4", 00:27:50.158 "trsvcid": "$NVMF_PORT", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.159 "hdgst": ${hdgst:-false}, 00:27:50.159 "ddgst": ${ddgst:-false} 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 } 00:27:50.159 EOF 00:27:50.159 )") 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.159 { 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme$subsystem", 00:27:50.159 "trtype": "$TEST_TRANSPORT", 00:27:50.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "$NVMF_PORT", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.159 "hdgst": ${hdgst:-false}, 00:27:50.159 "ddgst": ${ddgst:-false} 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 } 00:27:50.159 EOF 00:27:50.159 )") 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.159 { 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme$subsystem", 00:27:50.159 "trtype": "$TEST_TRANSPORT", 00:27:50.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "$NVMF_PORT", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.159 "hdgst": ${hdgst:-false}, 00:27:50.159 "ddgst": ${ddgst:-false} 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 } 00:27:50.159 EOF 00:27:50.159 )") 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:50.159 19:01:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme1", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.159 "hdgst": false, 00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme2", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:50.159 "hdgst": false, 00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme3", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:50.159 "hdgst": false, 00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme4", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:50.159 "hdgst": false, 00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme5", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:50.159 "hdgst": false, 00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme6", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:50.159 "hdgst": false, 00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme7", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:50.159 "hdgst": false, 00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.159 "name": "Nvme8", 00:27:50.159 "trtype": "tcp", 00:27:50.159 "traddr": "10.0.0.2", 00:27:50.159 "adrfam": "ipv4", 00:27:50.159 "trsvcid": "4420", 00:27:50.159 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:50.159 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:50.159 "hdgst": false, 
00:27:50.159 "ddgst": false 00:27:50.159 }, 00:27:50.159 "method": "bdev_nvme_attach_controller" 00:27:50.159 },{ 00:27:50.159 "params": { 00:27:50.160 "name": "Nvme9", 00:27:50.160 "trtype": "tcp", 00:27:50.160 "traddr": "10.0.0.2", 00:27:50.160 "adrfam": "ipv4", 00:27:50.160 "trsvcid": "4420", 00:27:50.160 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:50.160 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:50.160 "hdgst": false, 00:27:50.160 "ddgst": false 00:27:50.160 }, 00:27:50.160 "method": "bdev_nvme_attach_controller" 00:27:50.160 },{ 00:27:50.160 "params": { 00:27:50.160 "name": "Nvme10", 00:27:50.160 "trtype": "tcp", 00:27:50.160 "traddr": "10.0.0.2", 00:27:50.160 "adrfam": "ipv4", 00:27:50.160 "trsvcid": "4420", 00:27:50.160 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:50.160 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:50.160 "hdgst": false, 00:27:50.160 "ddgst": false 00:27:50.160 }, 00:27:50.160 "method": "bdev_nvme_attach_controller" 00:27:50.160 }' 00:27:50.160 [2024-07-25 19:01:01.862599] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:50.160 [2024-07-25 19:01:01.862673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620268 ] 00:27:50.160 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.160 [2024-07-25 19:01:01.925926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.160 [2024-07-25 19:01:02.012782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.063 Running I/O for 10 seconds... 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.323 19:01:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.323 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:52.323 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:52.323 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:52.581 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.861 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3620114 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3620114 ']' 00:27:52.862 19:01:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3620114 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3620114 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3620114' 00:27:52.862 killing process with pid 3620114 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3620114 00:27:52.862 19:01:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3620114 00:27:52.862 [2024-07-25 19:01:04.631789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5420 is same with the state(5) to be set 00:27:52.862
[The same tcp.c:1598:nvmf_tcp_qpair_set_recv_state *ERROR* "The recv state of tqpair=... is same with the state(5) to be set" record is logged many times between 19:01:04.631 and 19:01:04.642 for tqpair 0x1fe5420, 0x20d9f30, 0x20da3d0, 0x20da890, 0x20dad30 and 0x1d84b00 while the target is torn down; the near-identical repetitions are elided here.]
00:27:52.866 [2024-07-25 19:01:04.642521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25
19:01:04.642534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same 
with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.866 [2024-07-25 19:01:04.642942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.642953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.642965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.642977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.642989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643068] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.643231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84b00 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.644300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644447] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3ec0 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.644503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b400 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.644662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c52de0 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.644843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.644972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.644986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2df0 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.645030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84fa0 is same with the state(5) to be set 00:27:52.867 [2024-07-25 19:01:04.645034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.867 [2024-07-25 19:01:04.645064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.867 [2024-07-25 19:01:04.645068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84fa0 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84fa0 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84fa0 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84fa0 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d84fa0 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31cc0 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32a60 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:52.868 [2024-07-25 19:01:04.645492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50f30 is same with the state(5) to be set 00:27:52.868 [2024-07-25 19:01:04.645605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 
19:01:04.645631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 
19:01:04.645940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.645985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.645999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.646019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.646033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.646049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.868 [2024-07-25 19:01:04.646070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.868 [2024-07-25 19:01:04.646087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 
19:01:04.646232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:1[2024-07-25 19:01:04.646298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:01:04.646312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with [2024-07-25 19:01:04.646333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:1the state(5) to be set 00:27:52.869 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.869 [2024-07-25 19:01:04.646405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:01:04.646443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 
00:27:52.869 [2024-07-25 19:01:04.646555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:01:04.646602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 [2024-07-25 19:01:04.646655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.869 [2024-07-25 19:01:04.646668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.869 [2024-07-25 19:01:04.646680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:1[2024-07-25 19:01:04.646681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.869 the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:01:04.646697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 the 
state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with [2024-07-25 19:01:04.646716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:1the state(5) to be set 00:27:52.870 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.646731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with [2024-07-25 19:01:04.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:52.870 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 [2024-07-25 19:01:04.646746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.646759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 [2024-07-25 19:01:04.646772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.646785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 [2024-07-25 19:01:04.646798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:1[2024-07-25 19:01:04.646811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:01:04.646826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.646853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 [2024-07-25 19:01:04.646866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.646879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 [2024-07-25 19:01:04.646892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.646908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 [2024-07-25 19:01:04.646921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128[2024-07-25 19:01:04.646934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 19:01:04.646949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.646976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.870 [2024-07-25 19:01:04.646989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.646995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.870 [2024-07-25 19:01:04.647001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set 00:27:52.870 [2024-07-25 19:01:04.647009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.870 [2024-07-25 19:01:04.647014 - 19:01:04.647119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4ac0 is same with the state(5) to be set (repeated, interleaved with the completion notices that follow)
00:27:52.870 [2024-07-25 19:01:04.647025 - 19:01:04.647597] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:6 through cid:24 nsid:1 (lba 25344-27648, len:128, SGL TRANSPORT DATA BLOCK) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.870 [2024-07-25 19:01:04.647695] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17f5ff0 was disconnected and freed. reset controller.
00:27:52.871 [2024-07-25 19:01:04.647849 - 19:01:04.648777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4f60 is same with the state(5) to be set (repeated, interleaved with the WRITE completion notices below)
00:27:52.871 [2024-07-25 19:01:04.648521 - 19:01:04.648784] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:0 through cid:7 nsid:1 (lba 24576-25472, len:128, SGL TRANSPORT DATA BLOCK) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.872 [2024-07-25 19:01:04.648801 - 19:01:04.650498] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:8 through cid:63 nsid:1 (lba 25600-32640, len:128, SGL TRANSPORT DATA BLOCK) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.873 [2024-07-25 19:01:04.650531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.873 [2024-07-25 19:01:04.650602] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd5430 was disconnected and freed. reset controller.
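Every completion in the runs above carries the same NVMe status, ABORTED - SQ DELETION (00/08), the status returned for commands that were still outstanding when their submission queue was deleted; here it simply marks the I/O that was in flight while the qpair was torn down for the controller reset. When reading a log like this, the useful signal is usually which queues lost commands and how many, not the individual cid/lba lines. Below is a minimal tallying sketch in Python; it assumes the raw per-command log lines in the format shown above are piped in on stdin, and the script itself is only an illustration, not part of the SPDK test suite.

    #!/usr/bin/env python3
    # Tally "ABORTED - SQ DELETION" completions per opcode/queue and list the
    # qpairs that bdev_nvme reported as disconnected and freed.
    # Hypothetical log-reading helper; regexes follow the message format above.
    import re
    import sys
    from collections import Counter

    cmd_re   = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")
    abort_re = re.compile(r"ABORTED - SQ DELETION")
    freed_re = re.compile(r"qpair (0x[0-9a-f]+) was disconnected and freed")

    aborted  = Counter()   # (opcode, sqid) -> number of aborted commands
    freed    = []          # qpair addresses reported freed
    last_cmd = None        # most recent command print, awaiting its completion

    for line in sys.stdin:
        m = cmd_re.search(line)
        if m:
            last_cmd = (m.group(1), m.group(2))
        if abort_re.search(line) and last_cmd:
            aborted[last_cmd] += 1
            last_cmd = None
        f = freed_re.search(line)
        if f:
            freed.append(f.group(1))

    for (opcode, sqid), count in sorted(aborted.items()):
        print(f"{opcode} commands aborted by SQ deletion on sqid {sqid}: {count}")
    print("qpairs disconnected and freed:", ", ".join(freed) or "none")

Fed the raw console output, this would reduce the blocks above to a handful of per-queue counts plus the list of freed qpair pointers.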
00:27:52.873 [2024-07-25 19:01:04.653600] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:52.873 [2024-07-25 19:01:04.653643] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:52.874 [2024-07-25 19:01:04.653699 - 19:01:04.656823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, reported for tqpairs 0x1721610 and 0x17f2df0 (twice each), 0x1ce3ec0, 0x1c2b400, 0x1c52de0, 0x1c31cc0, 0x1c32a60 and 0x1c50f30
00:27:52.874 [2024-07-25 19:01:04.655214 - 19:01:04.656741] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (logged seven times)
00:27:52.874 [2024-07-25 19:01:04.655375 - 19:01:04.655557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2df0 and tqpair=0x1721610 with addr=10.0.0.2, port=4420; the recv state of each is same with the state(5) to be set
00:27:52.874 [2024-07-25 19:01:04.655603 - 19:01:04.655989] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:3 on two admin qpairs each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcdcf0 and tqpair=0x1dac110 is same with the state(5) to be set
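The reconnect attempts in this block fail inside posix_sock_create() with errno = 111, which on Linux is ECONNREFUSED: at that instant nothing on 10.0.0.2 appears to be accepting connections on port 4420 (presumably because the target side is still tearing down or re-creating its listeners during the reset), so the host qpairs 0x17f2df0 and 0x1721610 cannot be re-established yet. The accompanying "Failed to flush tqpair ... (9): Bad file descriptor" messages carry errno 9 (EBADF), i.e. the underlying sockets are already closed. A small standard-library snippet to map the numeric errno values quoted here to their symbolic names (Linux errno numbering assumed):

    # Translate the errno values quoted in the log above (Linux numbering assumed).
    import errno
    import os

    for num in (111, 9):                       # connect() failure / failed flush
        name = errno.errorcode.get(num, "?")   # ECONNREFUSED, EBADF on Linux
        print(f"errno {num}: {name} - {os.strerror(num)}")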
00:27:52.874 [2024-07-25 19:01:04.656889 - 19:01:04.658876] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0 through cid:63 nsid:1 (lba 24576-32640, len:128, SGL TRANSPORT DATA BLOCK) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.876 [2024-07-25 19:01:04.658891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd1470 is same with the state(5) to be set
00:27:52.876 [2024-07-25 19:01:04.658990] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd1470 was disconnected and freed. reset controller.
00:27:52.876 [2024-07-25 19:01:04.659055 - 19:01:04.659257] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:4 through cid:9 nsid:1 (lba 16896-17536, len:128, SGL TRANSPORT DATA BLOCK) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.876 [2024-07-25 19:01:04.659273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.876 [2024-07-25 19:01:04.659954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.876 [2024-07-25 19:01:04.659969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.659984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.877 [2024-07-25 19:01:04.660538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 
19:01:04.660838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.660971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.660985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.661000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.661014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.661029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.661043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.877 [2024-07-25 19:01:04.661057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd3f30 is same with the state(5) to be set 00:27:52.877 [2024-07-25 19:01:04.661174] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd3f30 was disconnected and freed. reset controller. 00:27:52.877 [2024-07-25 19:01:04.661259] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:52.877 [2024-07-25 19:01:04.661281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:52.877 [2024-07-25 19:01:04.661297] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:52.877 [2024-07-25 19:01:04.661317] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:52.877 [2024-07-25 19:01:04.661331] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:52.877 [2024-07-25 19:01:04.661348] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:52.877 [2024-07-25 19:01:04.662597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.877 [2024-07-25 19:01:04.662621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.662977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.662992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.878 [2024-07-25 19:01:04.663773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:52.878 [2024-07-25 19:01:04.663804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.878 [2024-07-25 19:01:04.663818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.663833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.663848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.663863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.663877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.663893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.663907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.663923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.663937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.663953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.663967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.663983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.663997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 
[2024-07-25 19:01:04.664120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.664403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.664417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 
19:01:04.664434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.664448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.664463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.664478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.664494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.664507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.664524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.664542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.664558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.664572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.664587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd29e0 is same with the state(5) to be set
00:27:52.879 [2024-07-25 19:01:04.664669] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd29e0 was disconnected and freed. reset controller.
00:27:52.879 [2024-07-25 19:01:04.665858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:52.879 [2024-07-25 19:01:04.665883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:52.879 [2024-07-25 19:01:04.665898] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:52.879 [2024-07-25 19:01:04.665923] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:52.879 [2024-07-25 19:01:04.666002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcdcf0 (9): Bad file descriptor
00:27:52.879 [2024-07-25 19:01:04.666054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dac110 (9): Bad file descriptor
00:27:52.879 [2024-07-25 19:01:04.666095] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:52.879 [2024-07-25 19:01:04.667322] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:52.879 [2024-07-25 19:01:04.667550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.879 [2024-07-25 19:01:04.667579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c32a60 with addr=10.0.0.2, port=4420
00:27:52.879 [2024-07-25 19:01:04.667597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32a60 is same with the state(5) to be set
00:27:52.879 [2024-07-25 19:01:04.667701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.879 [2024-07-25 19:01:04.667727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52de0 with addr=10.0.0.2, port=4420
00:27:52.879 [2024-07-25 19:01:04.667742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c52de0 is same with the state(5) to be set
00:27:52.879 [2024-07-25 19:01:04.667801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.667821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.667843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.667859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.667875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.667890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.667906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.667921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.667943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.667959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.667975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.667989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.668005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.879 [2024-07-25 19:01:04.668019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:52.879 [2024-07-25 19:01:04.668035]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.879 [2024-07-25 19:01:04.668050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.879 [2024-07-25 19:01:04.668074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.668981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.668995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.880 [2024-07-25 19:01:04.669269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.880 [2024-07-25 19:01:04.669283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.669757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.669772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7320 is same with the state(5) to be set 00:27:52.881 [2024-07-25 19:01:04.671313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.881 [2024-07-25 19:01:04.671804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.881 [2024-07-25 19:01:04.671819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.671834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.671849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.671863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.671879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.671893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.671909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.671923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.671939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.671953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.671969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.671983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.671999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:52.882 [2024-07-25 19:01:04.672964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.672979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.672995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.673009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.882 [2024-07-25 19:01:04.673024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.882 [2024-07-25 19:01:04.673038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 
19:01:04.673279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.673293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.673308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06ef0 is same with the state(5) to be set 00:27:52.883 [2024-07-25 19:01:04.674830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.674858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.674880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.674896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.674914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.674928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.674945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.674959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.674975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.674989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.883 [2024-07-25 19:01:04.675777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.883 [2024-07-25 19:01:04.675791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.675807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.675820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.675836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.675850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.675866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.675880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.675896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.675910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.675926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.675940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.675955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.675969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.675985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.675999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.884 [2024-07-25 19:01:04.676785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.884 [2024-07-25 19:01:04.676800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d91e20 is same with the state(5) to be set 00:27:52.884 [2024-07-25 19:01:04.679276] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:52.884 [2024-07-25 19:01:04.679308] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:52.884 [2024-07-25 19:01:04.679330] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:52.884 [2024-07-25 19:01:04.679347] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:52.884 [2024-07-25 19:01:04.679624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.884 [2024-07-25 19:01:04.679654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c31cc0 with addr=10.0.0.2, port=4420 00:27:52.884 [2024-07-25 19:01:04.679671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31cc0 is same with the state(5) to be set 00:27:52.884 [2024-07-25 19:01:04.679696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c32a60 (9): Bad file descriptor 00:27:52.884 [2024-07-25 19:01:04.679715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c52de0 (9): Bad file descriptor 00:27:52.884 [2024-07-25 19:01:04.679784] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:52.884 [2024-07-25 19:01:04.679820] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.884 [2024-07-25 19:01:04.679842] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.884 [2024-07-25 19:01:04.679863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31cc0 (9): Bad file descriptor 00:27:52.885 [2024-07-25 19:01:04.680230] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:52.885 [2024-07-25 19:01:04.680381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.885 [2024-07-25 19:01:04.680408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1721610 with addr=10.0.0.2, port=4420 00:27:52.885 [2024-07-25 19:01:04.680425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1721610 is same with the state(5) to be set 00:27:52.885 [2024-07-25 19:01:04.680520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.885 [2024-07-25 19:01:04.680545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2df0 with addr=10.0.0.2, port=4420 00:27:52.885 [2024-07-25 19:01:04.680561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2df0 is same with the state(5) to be set 00:27:52.885 [2024-07-25 19:01:04.680655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.885 [2024-07-25 19:01:04.680680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2b400 with addr=10.0.0.2, port=4420 00:27:52.885 [2024-07-25 19:01:04.680695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b400 is same with the state(5) to be set 00:27:52.885 [2024-07-25 19:01:04.680798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.885 [2024-07-25 19:01:04.680822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c50f30 with addr=10.0.0.2, port=4420 00:27:52.885 [2024-07-25 19:01:04.680837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50f30 is same with the state(5) to be set 00:27:52.885 [2024-07-25 19:01:04.680855] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:52.885 [2024-07-25 19:01:04.680874] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:52.885 [2024-07-25 19:01:04.680890] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:52.885 [2024-07-25 19:01:04.680910] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:52.885 [2024-07-25 19:01:04.680924] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:52.885 [2024-07-25 19:01:04.680937] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:27:52.885 [2024-07-25 19:01:04.681536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 
19:01:04.681858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.681970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.681985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682174] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.885 [2024-07-25 19:01:04.682297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.885 [2024-07-25 19:01:04.682311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.682971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.682987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.886 [2024-07-25 19:01:04.683512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.886 [2024-07-25 19:01:04.683527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd6930 is same with the state(5) to be set 00:27:52.887 [2024-07-25 19:01:04.684783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.684807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.684827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.684843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.684859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.684874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.684889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.684904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.684920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.684935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.684950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.684965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.684981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.684995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.887 [2024-07-25 19:01:04.685960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.887 [2024-07-25 19:01:04.685974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.685990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:52.888 [2024-07-25 19:01:04.686210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 
19:01:04.686504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.888 [2024-07-25 19:01:04.686728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:52.888 [2024-07-25 19:01:04.686742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d90920 is same with the state(5) to be set 00:27:52.888 [2024-07-25 19:01:04.688670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.888 [2024-07-25 19:01:04.688697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:52.888 [2024-07-25 19:01:04.688715] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:52.888 task offset: 27776 on job bdev=Nvme1n1 fails
00:27:52.888
00:27:52.888                                                            Latency(us)
00:27:52.888 Device Information     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:27:52.888 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme1n1 ended in about 0.90 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme1n1                :       0.90     213.50      13.34      71.17     0.00   222238.15   18932.62  231463.44
00:27:52.888 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme2n1 ended in about 0.92 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme2n1                :       0.92     139.40       8.71      69.70     0.00   296532.45   21651.15  254765.13
00:27:52.888 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme3n1 ended in about 0.91 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme3n1                :       0.91     211.03      13.19      70.34     0.00   215700.29   13689.74  246997.90
00:27:52.888 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme4n1 ended in about 0.91 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme4n1                :       0.91     214.31      13.39      69.98     0.00   208999.72    9417.77  253211.69
00:27:52.888 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme5n1 ended in about 0.92 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme5n1                :       0.92     138.87       8.68      69.44     0.00   279358.32   35146.71  239230.67
00:27:52.888 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme6n1 ended in about 0.91 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme6n1                :       0.91     144.56       9.04      70.09     0.00   264843.46   18932.62  265639.25
00:27:52.888 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme7n1 ended in about 0.90 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme7n1                :       0.90     213.15      13.32      71.05     0.00   195055.03    8543.95  256318.58
00:27:52.888 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme8n1 ended in about 0.93 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme8n1                :       0.93     137.35       8.58      68.68     0.00   264553.05   17282.09  259425.47
00:27:52.888 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme9n1 ended in about 0.94 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme9n1                :       0.94     136.88       8.56      68.44     0.00   259918.76   21845.33  288940.94
00:27:52.888 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:52.888 Job: Nvme10n1 ended in about 0.93 seconds with error
00:27:52.888 Verification LBA range: start 0x0 length 0x400
00:27:52.888 Nvme10n1               :       0.93     138.35       8.65      69.17     0.00   250808.70   19515.16  257872.02
00:27:52.888 ===================================================================================================================
00:27:52.888 Total                  :              1687.41     105.46     698.06     0.00   241630.25    8543.95  288940.94
00:27:52.889 [2024-07-25 19:01:04.715807] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:27:52.889 [2024-07-25 19:01:04.715897] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:52.889 [2024-07-25 19:01:04.716178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.889 [2024-07-25 19:01:04.716215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3ec0 with addr=10.0.0.2, port=4420 00:27:52.889 [2024-07-25 19:01:04.716236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3ec0 is same with the state(5) to be set 00:27:52.889 [2024-07-25 19:01:04.716264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1721610 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.716293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2df0 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.716310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b400 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.716328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c50f30 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.716353] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.716366] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.716381] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:52.889 [2024-07-25 19:01:04.716573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.716731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.889 [2024-07-25 19:01:04.716760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dac110 with addr=10.0.0.2, port=4420 00:27:52.889 [2024-07-25 19:01:04.716777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dac110 is same with the state(5) to be set 00:27:52.889 [2024-07-25 19:01:04.716882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.889 [2024-07-25 19:01:04.716908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcdcf0 with addr=10.0.0.2, port=4420 00:27:52.889 [2024-07-25 19:01:04.716924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcdcf0 is same with the state(5) to be set 00:27:52.889 [2024-07-25 19:01:04.716955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3ec0 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.716973] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.716986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.716998] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:27:52.889 [2024-07-25 19:01:04.717018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.717033] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.717046] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:52.889 [2024-07-25 19:01:04.717071] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.717088] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.717101] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:52.889 [2024-07-25 19:01:04.717117] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.717138] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.717151] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:52.889 [2024-07-25 19:01:04.717208] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.889 [2024-07-25 19:01:04.717229] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.889 [2024-07-25 19:01:04.717248] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.889 [2024-07-25 19:01:04.717267] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.889 [2024-07-25 19:01:04.717285] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:52.889 [2024-07-25 19:01:04.717928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.717952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.717965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.717977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.718004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dac110 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.718033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcdcf0 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.718049] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.718070] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.718094] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:27:52.889 [2024-07-25 19:01:04.718443] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:52.889 [2024-07-25 19:01:04.718472] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:52.889 [2024-07-25 19:01:04.718489] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:52.889 [2024-07-25 19:01:04.718509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.718543] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.718558] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.718571] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:52.889 [2024-07-25 19:01:04.718587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.718601] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.718613] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:52.889 [2024-07-25 19:01:04.718669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.718687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.718792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.889 [2024-07-25 19:01:04.718818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52de0 with addr=10.0.0.2, port=4420 00:27:52.889 [2024-07-25 19:01:04.718833] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c52de0 is same with the state(5) to be set 00:27:52.889 [2024-07-25 19:01:04.718927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.889 [2024-07-25 19:01:04.718953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c32a60 with addr=10.0.0.2, port=4420 00:27:52.889 [2024-07-25 19:01:04.718968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32a60 is same with the state(5) to be set 00:27:52.889 [2024-07-25 19:01:04.719079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.889 [2024-07-25 19:01:04.719107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c31cc0 with addr=10.0.0.2, port=4420 00:27:52.889 [2024-07-25 19:01:04.719123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31cc0 is same with the state(5) to be set 00:27:52.889 [2024-07-25 19:01:04.719167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c52de0 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.719191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c32a60 (9): Bad file descriptor 00:27:52.889 [2024-07-25 19:01:04.719208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31cc0 (9): Bad file descriptor 
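For reference, the repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: by this point the shutdown test has taken the target down, so nothing accepts connections on 10.0.0.2:4420 any more, every reconnect attempt from the bdevperf side is refused immediately, and the controller reinitialization / reset failures that follow are the expected fallout. A minimal way to confirm the refusal from the initiator side, offered only as a sketch that assumes bash and the same 10.0.0.2:4420 listener address used throughout this test:

    # bash's /dev/tcp redirection opens a TCP connection; no extra tools required
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "port 4420 is still accepting connections"
    else
        echo "connection refused or timed out - matches errno 111 (ECONNREFUSED) in the log"
    fi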
00:27:52.889 [2024-07-25 19:01:04.719245] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.719263] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.719277] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:52.889 [2024-07-25 19:01:04.719293] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.719307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.719319] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:52.889 [2024-07-25 19:01:04.719334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:52.889 [2024-07-25 19:01:04.719347] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:52.889 [2024-07-25 19:01:04.719359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:52.889 [2024-07-25 19:01:04.719402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.719421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.889 [2024-07-25 19:01:04.719433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.471 19:01:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:53.471 19:01:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3620268 00:27:54.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3620268) - No such process 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.409 19:01:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:54.409 rmmod nvme_tcp 00:27:54.409 rmmod nvme_fabrics 00:27:54.409 rmmod nvme_keyring 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.409 19:01:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.947 19:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:56.947 00:27:56.947 real 0m7.586s 00:27:56.947 user 0m18.808s 00:27:56.947 sys 0m1.452s 00:27:56.947 19:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:56.947 19:01:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 ************************************ 00:27:56.947 END TEST nvmf_shutdown_tc3 00:27:56.947 ************************************ 00:27:56.947 19:01:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:56.947 00:27:56.947 real 0m26.718s 00:27:56.947 user 1m13.698s 00:27:56.947 sys 0m6.204s 00:27:56.947 19:01:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:56.947 19:01:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 ************************************ 00:27:56.947 END TEST nvmf_shutdown 00:27:56.947 ************************************ 00:27:56.947 19:01:08 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:56.947 19:01:08 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.947 19:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 19:01:08 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:56.947 19:01:08 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:56.947 19:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 19:01:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:56.947 19:01:08 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:56.947 19:01:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:56.947 19:01:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:56.947 
19:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.947 ************************************ 00:27:56.947 START TEST nvmf_multicontroller 00:27:56.947 ************************************ 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:56.947 * Looking for test storage... 00:27:56.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.947 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.948 19:01:08 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:56.948 19:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.852 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.852 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.852 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.852 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.852 19:01:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:27:58.852 00:27:58.852 --- 10.0.0.2 ping statistics --- 00:27:58.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.852 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:58.852 00:27:58.852 --- 10.0.0.1 ping statistics --- 00:27:58.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.852 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:58.852 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3622789 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3622789 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3622789 ']' 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:58.853 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.853 [2024-07-25 19:01:10.675885] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:58.853 [2024-07-25 19:01:10.675983] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.853 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.112 [2024-07-25 19:01:10.744379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.112 [2024-07-25 19:01:10.840630] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
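The nvmf_tcp_init sequence above wires the two e810 ports into a point-to-point test topology: the target-side port (cvl_0_0) is moved into its own network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, which is why every target command later in the log is prefixed with "ip netns exec cvl_0_0_ns_spdk". Condensed into a sketch, assuming the same interface names and 10.0.0.0/24 addressing this run uses, the wiring amounts to:

    ip netns add cvl_0_0_ns_spdk                                          # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP traffic on the initiator port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions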
00:27:59.112 [2024-07-25 19:01:10.840696] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.112 [2024-07-25 19:01:10.840712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.112 [2024-07-25 19:01:10.840725] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.112 [2024-07-25 19:01:10.840738] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.112 [2024-07-25 19:01:10.840831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.112 [2024-07-25 19:01:10.842080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.112 [2024-07-25 19:01:10.842092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.112 [2024-07-25 19:01:10.983433] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.112 19:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.371 Malloc0 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.371 [2024-07-25 19:01:11.043929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.371 [2024-07-25 19:01:11.051828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.371 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.372 Malloc1 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=3622814 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3622814 /var/tmp/bdevperf.sock 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3622814 ']' 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:59.372 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.630 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:59.630 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:59.630 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:59.630 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.630 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.890 NVMe0n1 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.890 1 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.890 request: 00:27:59.890 { 00:27:59.890 "name": "NVMe0", 00:27:59.890 "trtype": "tcp", 00:27:59.890 "traddr": "10.0.0.2", 00:27:59.890 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:59.890 "hostaddr": "10.0.0.2", 00:27:59.890 "hostsvcid": "60000", 00:27:59.890 "adrfam": "ipv4", 00:27:59.890 "trsvcid": "4420", 00:27:59.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.890 "method": "bdev_nvme_attach_controller", 00:27:59.890 "req_id": 1 00:27:59.890 } 00:27:59.890 Got JSON-RPC error response 00:27:59.890 response: 00:27:59.890 { 00:27:59.890 "code": -114, 00:27:59.890 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:59.890 } 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.890 request: 00:27:59.890 { 00:27:59.890 "name": "NVMe0", 00:27:59.890 "trtype": "tcp", 00:27:59.890 "traddr": "10.0.0.2", 00:27:59.890 "hostaddr": "10.0.0.2", 00:27:59.890 "hostsvcid": "60000", 00:27:59.890 "adrfam": "ipv4", 00:27:59.890 "trsvcid": "4420", 00:27:59.890 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:59.890 "method": "bdev_nvme_attach_controller", 00:27:59.890 "req_id": 1 00:27:59.890 } 00:27:59.890 Got JSON-RPC error response 00:27:59.890 response: 00:27:59.890 { 00:27:59.890 "code": -114, 00:27:59.890 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:59.890 } 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.890 request: 00:27:59.890 { 00:27:59.890 "name": "NVMe0", 00:27:59.890 "trtype": "tcp", 00:27:59.890 "traddr": "10.0.0.2", 00:27:59.890 "hostaddr": "10.0.0.2", 00:27:59.890 "hostsvcid": "60000", 00:27:59.890 "adrfam": "ipv4", 00:27:59.890 "trsvcid": "4420", 00:27:59.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.890 "multipath": "disable", 00:27:59.890 "method": "bdev_nvme_attach_controller", 00:27:59.890 "req_id": 1 00:27:59.890 } 00:27:59.890 Got JSON-RPC error response 00:27:59.890 response: 00:27:59.890 { 00:27:59.890 "code": -114, 00:27:59.890 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:59.890 } 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # es=1 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:59.890 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.891 request: 00:27:59.891 { 00:27:59.891 "name": "NVMe0", 00:27:59.891 "trtype": "tcp", 00:27:59.891 "traddr": "10.0.0.2", 00:27:59.891 "hostaddr": "10.0.0.2", 00:27:59.891 "hostsvcid": "60000", 00:27:59.891 "adrfam": "ipv4", 00:27:59.891 "trsvcid": "4420", 00:27:59.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.891 "multipath": "failover", 00:27:59.891 "method": "bdev_nvme_attach_controller", 00:27:59.891 "req_id": 1 00:27:59.891 } 00:27:59.891 Got JSON-RPC error response 00:27:59.891 response: 00:27:59.891 { 00:27:59.891 "code": -114, 00:27:59.891 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:59.891 } 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.891 00:27:59.891 19:01:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.891 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.150 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:00.150 19:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:01.084 0 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3622814 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3622814 ']' 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3622814 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.084 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3622814 00:28:01.342 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:01.342 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:01.342 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3622814' 00:28:01.342 killing process with 
pid 3622814 00:28:01.342 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3622814 00:28:01.342 19:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3622814 00:28:01.342 19:01:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.342 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.342 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.342 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.342 19:01:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:01.343 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:01.343 [2024-07-25 19:01:11.152985] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:01.343 [2024-07-25 19:01:11.153097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622814 ] 00:28:01.343 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.343 [2024-07-25 19:01:11.216730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.343 [2024-07-25 19:01:11.304727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.343 [2024-07-25 19:01:11.801206] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name d0c74e5f-122a-46fe-909f-78a6300411a2 already exists 00:28:01.343 [2024-07-25 19:01:11.801245] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:d0c74e5f-122a-46fe-909f-78a6300411a2 alias for bdev NVMe1n1 00:28:01.343 [2024-07-25 19:01:11.801278] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:01.343 Running I/O for 1 seconds... 
00:28:01.343 00:28:01.343 Latency(us) 00:28:01.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.343 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:01.343 NVMe0n1 : 1.00 18427.43 71.98 0.00 0.00 6935.43 2087.44 12524.66 00:28:01.343 =================================================================================================================== 00:28:01.343 Total : 18427.43 71.98 0.00 0.00 6935.43 2087.44 12524.66 00:28:01.343 Received shutdown signal, test time was about 1.000000 seconds 00:28:01.343 00:28:01.343 Latency(us) 00:28:01.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.343 =================================================================================================================== 00:28:01.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.343 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.343 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.343 rmmod nvme_tcp 00:28:01.601 rmmod nvme_fabrics 00:28:01.601 rmmod nvme_keyring 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3622789 ']' 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3622789 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3622789 ']' 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3622789 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3622789 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3622789' 00:28:01.601 killing process with pid 3622789 00:28:01.601 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3622789 00:28:01.601 19:01:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3622789 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.859 19:01:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.761 19:01:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:03.761 00:28:03.761 real 0m7.138s 00:28:03.761 user 0m10.737s 00:28:03.761 sys 0m2.287s 00:28:03.761 19:01:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.761 19:01:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.761 ************************************ 00:28:03.761 END TEST nvmf_multicontroller 00:28:03.761 ************************************ 00:28:03.761 19:01:15 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:03.761 19:01:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:03.761 19:01:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.761 19:01:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.761 ************************************ 00:28:03.761 START TEST nvmf_aer 00:28:03.761 ************************************ 00:28:03.761 19:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:04.020 * Looking for test storage... 
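For orientation before the nvmf_aer output that follows: the target-side setup aer.sh performs reduces to the rpc_cmd sequence sketched here. This is only a condensed, illustrative rendering of the calls that appear later in this trace (the NQN, serial number, malloc sizes, and the 10.0.0.2:4420 listener are the values this harness configures, not general defaults), written against SPDK's scripts/rpc.py rather than the harness's rpc_cmd wrapper:

  # Create the TCP transport and a malloc-backed subsystem with one namespace.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # With test/nvme/aer/aer connected and registered for AENs, hot-add a second
  # namespace; the resulting "Changed Namespace" notice is what the test waits for.
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2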
00:28:04.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.020 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.021 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.021 19:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.021 19:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.021 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:04.021 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.021 19:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.021 19:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.927 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.927 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:05.928 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:05.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:05.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:05.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.928 
19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.928 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:06.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:28:06.187 00:28:06.187 --- 10.0.0.2 ping statistics --- 00:28:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.187 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:28:06.187 00:28:06.187 --- 10.0.0.1 ping statistics --- 00:28:06.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.187 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3625021 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3625021 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3625021 ']' 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:06.187 19:01:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.187 [2024-07-25 19:01:17.914815] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:06.187 [2024-07-25 19:01:17.914901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.187 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.187 [2024-07-25 19:01:17.980603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.446 [2024-07-25 19:01:18.069501] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.446 [2024-07-25 19:01:18.069567] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:06.446 [2024-07-25 19:01:18.069580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.446 [2024-07-25 19:01:18.069591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.446 [2024-07-25 19:01:18.069614] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.446 [2024-07-25 19:01:18.069708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.446 [2024-07-25 19:01:18.069774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.446 [2024-07-25 19:01:18.069840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.446 [2024-07-25 19:01:18.069845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.446 [2024-07-25 19:01:18.225842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.446 Malloc0 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.446 [2024-07-25 19:01:18.277287] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.446 [ 00:28:06.446 { 00:28:06.446 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.446 "subtype": "Discovery", 00:28:06.446 "listen_addresses": [], 00:28:06.446 "allow_any_host": true, 00:28:06.446 "hosts": [] 00:28:06.446 }, 00:28:06.446 { 00:28:06.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.446 "subtype": "NVMe", 00:28:06.446 "listen_addresses": [ 00:28:06.446 { 00:28:06.446 "trtype": "TCP", 00:28:06.446 "adrfam": "IPv4", 00:28:06.446 "traddr": "10.0.0.2", 00:28:06.446 "trsvcid": "4420" 00:28:06.446 } 00:28:06.446 ], 00:28:06.446 "allow_any_host": true, 00:28:06.446 "hosts": [], 00:28:06.446 "serial_number": "SPDK00000000000001", 00:28:06.446 "model_number": "SPDK bdev Controller", 00:28:06.446 "max_namespaces": 2, 00:28:06.446 "min_cntlid": 1, 00:28:06.446 "max_cntlid": 65519, 00:28:06.446 "namespaces": [ 00:28:06.446 { 00:28:06.446 "nsid": 1, 00:28:06.446 "bdev_name": "Malloc0", 00:28:06.446 "name": "Malloc0", 00:28:06.446 "nguid": "C3807A80E3584160A87ACC754E54FB27", 00:28:06.446 "uuid": "c3807a80-e358-4160-a87a-cc754e54fb27" 00:28:06.446 } 00:28:06.446 ] 00:28:06.446 } 00:28:06.446 ] 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3625162 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:06.446 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:06.706 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.706 Malloc1 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.706 Asynchronous Event Request test 00:28:06.706 Attaching to 10.0.0.2 00:28:06.706 Attached to 10.0.0.2 00:28:06.706 Registering asynchronous event callbacks... 00:28:06.706 Starting namespace attribute notice tests for all controllers... 00:28:06.706 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:06.706 aer_cb - Changed Namespace 00:28:06.706 Cleaning up... 00:28:06.706 [ 00:28:06.706 { 00:28:06.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.706 "subtype": "Discovery", 00:28:06.706 "listen_addresses": [], 00:28:06.706 "allow_any_host": true, 00:28:06.706 "hosts": [] 00:28:06.706 }, 00:28:06.706 { 00:28:06.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.706 "subtype": "NVMe", 00:28:06.706 "listen_addresses": [ 00:28:06.706 { 00:28:06.706 "trtype": "TCP", 00:28:06.706 "adrfam": "IPv4", 00:28:06.706 "traddr": "10.0.0.2", 00:28:06.706 "trsvcid": "4420" 00:28:06.706 } 00:28:06.706 ], 00:28:06.706 "allow_any_host": true, 00:28:06.706 "hosts": [], 00:28:06.706 "serial_number": "SPDK00000000000001", 00:28:06.706 "model_number": "SPDK bdev Controller", 00:28:06.706 "max_namespaces": 2, 00:28:06.706 "min_cntlid": 1, 00:28:06.706 "max_cntlid": 65519, 00:28:06.706 "namespaces": [ 00:28:06.706 { 00:28:06.706 "nsid": 1, 00:28:06.706 "bdev_name": "Malloc0", 00:28:06.706 "name": "Malloc0", 00:28:06.706 "nguid": "C3807A80E3584160A87ACC754E54FB27", 00:28:06.706 "uuid": "c3807a80-e358-4160-a87a-cc754e54fb27" 00:28:06.706 }, 00:28:06.706 { 00:28:06.706 "nsid": 2, 00:28:06.706 "bdev_name": "Malloc1", 00:28:06.706 "name": "Malloc1", 00:28:06.706 "nguid": "FC1AE8F2257843EB9041F46BC4DCC628", 00:28:06.706 "uuid": "fc1ae8f2-2578-43eb-9041-f46bc4dcc628" 00:28:06.706 } 00:28:06.706 ] 00:28:06.706 } 00:28:06.706 ] 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3625162 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.706 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.965 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.966 rmmod nvme_tcp 00:28:06.966 rmmod nvme_fabrics 00:28:06.966 rmmod nvme_keyring 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3625021 ']' 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3625021 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3625021 ']' 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3625021 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3625021 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3625021' 00:28:06.966 killing process with pid 3625021 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3625021 00:28:06.966 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3625021 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:28:07.224 19:01:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.131 19:01:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.131 00:28:09.131 real 0m5.357s 00:28:09.131 user 0m4.115s 00:28:09.131 sys 0m1.899s 00:28:09.132 19:01:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:09.132 19:01:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:09.132 ************************************ 00:28:09.132 END TEST nvmf_aer 00:28:09.132 ************************************ 00:28:09.132 19:01:20 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:09.132 19:01:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:09.132 19:01:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:09.132 19:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.390 ************************************ 00:28:09.390 START TEST nvmf_async_init 00:28:09.390 ************************************ 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:09.390 * Looking for test storage... 00:28:09.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.390 
19:01:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2e8d451a394e4b57988e542efdc3be7d 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.390 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.391 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.391 19:01:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.391 19:01:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.294 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.294 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:11.294 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:11.294 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:11.294 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.295 19:01:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.295 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.295 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.295 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.295 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:11.295 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:11.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:28:11.554 00:28:11.554 --- 10.0.0.2 ping statistics --- 00:28:11.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.554 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:11.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:28:11.554 00:28:11.554 --- 10.0.0.1 ping statistics --- 00:28:11.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.554 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3627100 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3627100 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@827 -- # '[' -z 3627100 ']' 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:11.554 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.554 [2024-07-25 19:01:23.282541] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:11.554 [2024-07-25 19:01:23.282612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.554 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.554 [2024-07-25 19:01:23.343731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.554 [2024-07-25 19:01:23.426473] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.554 [2024-07-25 19:01:23.426528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.554 [2024-07-25 19:01:23.426557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.554 [2024-07-25 19:01:23.426568] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.554 [2024-07-25 19:01:23.426578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
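The interface plumbing that nvmf_tcp_init performs in the trace above reduces to a small amount of iproute2/iptables work: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), while the second port (cvl_0_1) stays in the default namespace as the initiator side (10.0.0.1). The following is an illustrative, condensed reconstruction of those steps; the interface names, addresses, namespace name, and port come from the log, everything else is a sketch rather than the exact helper code in nvmf/common.sh.

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                        # target NIC lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                    # initiator address, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                       # initiator -> target reachability check
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator reachability check

The nvmf_tgt process is then launched under ip netns exec cvl_0_0_ns_spdk so that its TCP listeners bind inside the namespace, which is why every listener in this log sits on 10.0.0.2.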
00:28:11.554 [2024-07-25 19:01:23.426604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.811 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:11.811 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:11.811 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:11.811 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.811 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.811 19:01:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.811 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.812 [2024-07-25 19:01:23.573392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.812 null0 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2e8d451a394e4b57988e542efdc3be7d 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.812 [2024-07-25 19:01:23.613683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.812 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.076 nvme0n1 00:28:12.076 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.076 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:12.076 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.076 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.076 [ 00:28:12.076 { 00:28:12.076 "name": "nvme0n1", 00:28:12.076 "aliases": [ 00:28:12.076 "2e8d451a-394e-4b57-988e-542efdc3be7d" 00:28:12.076 ], 00:28:12.076 "product_name": "NVMe disk", 00:28:12.076 "block_size": 512, 00:28:12.076 "num_blocks": 2097152, 00:28:12.076 "uuid": "2e8d451a-394e-4b57-988e-542efdc3be7d", 00:28:12.076 "assigned_rate_limits": { 00:28:12.076 "rw_ios_per_sec": 0, 00:28:12.076 "rw_mbytes_per_sec": 0, 00:28:12.076 "r_mbytes_per_sec": 0, 00:28:12.076 "w_mbytes_per_sec": 0 00:28:12.076 }, 00:28:12.076 "claimed": false, 00:28:12.076 "zoned": false, 00:28:12.076 "supported_io_types": { 00:28:12.076 "read": true, 00:28:12.076 "write": true, 00:28:12.076 "unmap": false, 00:28:12.076 "write_zeroes": true, 00:28:12.076 "flush": true, 00:28:12.076 "reset": true, 00:28:12.076 "compare": true, 00:28:12.076 "compare_and_write": true, 00:28:12.076 "abort": true, 00:28:12.077 "nvme_admin": true, 00:28:12.077 "nvme_io": true 00:28:12.077 }, 00:28:12.077 "memory_domains": [ 00:28:12.077 { 00:28:12.077 "dma_device_id": "system", 00:28:12.077 "dma_device_type": 1 00:28:12.077 } 00:28:12.077 ], 00:28:12.077 "driver_specific": { 00:28:12.077 "nvme": [ 00:28:12.077 { 00:28:12.077 "trid": { 00:28:12.077 "trtype": "TCP", 00:28:12.077 "adrfam": "IPv4", 00:28:12.077 "traddr": "10.0.0.2", 00:28:12.077 "trsvcid": "4420", 00:28:12.077 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:12.077 }, 00:28:12.077 "ctrlr_data": { 00:28:12.077 "cntlid": 1, 00:28:12.077 "vendor_id": "0x8086", 00:28:12.077 "model_number": "SPDK bdev Controller", 00:28:12.077 "serial_number": "00000000000000000000", 00:28:12.077 "firmware_revision": "24.05.1", 00:28:12.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:12.077 "oacs": { 00:28:12.077 "security": 0, 00:28:12.077 "format": 0, 00:28:12.077 "firmware": 0, 00:28:12.077 "ns_manage": 0 00:28:12.077 }, 00:28:12.077 "multi_ctrlr": true, 00:28:12.077 "ana_reporting": false 00:28:12.077 }, 00:28:12.077 "vs": { 00:28:12.077 "nvme_version": "1.3" 00:28:12.077 }, 00:28:12.077 "ns_data": { 00:28:12.077 "id": 1, 00:28:12.077 "can_share": true 00:28:12.077 } 00:28:12.077 } 00:28:12.077 ], 00:28:12.077 "mp_policy": "active_passive" 00:28:12.077 } 00:28:12.077 } 00:28:12.077 ] 00:28:12.077 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.077 19:01:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:12.077 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.077 19:01:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.077 [2024-07-25 19:01:23.866254] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:12.077 [2024-07-25 19:01:23.866346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2694b90 (9): Bad file descriptor 00:28:12.364 [2024-07-25 19:01:24.008215] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:12.364 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.364 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:12.364 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.364 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.364 [ 00:28:12.364 { 00:28:12.364 "name": "nvme0n1", 00:28:12.364 "aliases": [ 00:28:12.364 "2e8d451a-394e-4b57-988e-542efdc3be7d" 00:28:12.364 ], 00:28:12.364 "product_name": "NVMe disk", 00:28:12.364 "block_size": 512, 00:28:12.364 "num_blocks": 2097152, 00:28:12.364 "uuid": "2e8d451a-394e-4b57-988e-542efdc3be7d", 00:28:12.364 "assigned_rate_limits": { 00:28:12.364 "rw_ios_per_sec": 0, 00:28:12.364 "rw_mbytes_per_sec": 0, 00:28:12.364 "r_mbytes_per_sec": 0, 00:28:12.364 "w_mbytes_per_sec": 0 00:28:12.364 }, 00:28:12.364 "claimed": false, 00:28:12.364 "zoned": false, 00:28:12.364 "supported_io_types": { 00:28:12.364 "read": true, 00:28:12.364 "write": true, 00:28:12.364 "unmap": false, 00:28:12.364 "write_zeroes": true, 00:28:12.364 "flush": true, 00:28:12.364 "reset": true, 00:28:12.364 "compare": true, 00:28:12.364 "compare_and_write": true, 00:28:12.364 "abort": true, 00:28:12.364 "nvme_admin": true, 00:28:12.364 "nvme_io": true 00:28:12.364 }, 00:28:12.364 "memory_domains": [ 00:28:12.364 { 00:28:12.364 "dma_device_id": "system", 00:28:12.364 "dma_device_type": 1 00:28:12.364 } 00:28:12.364 ], 00:28:12.364 "driver_specific": { 00:28:12.364 "nvme": [ 00:28:12.364 { 00:28:12.364 "trid": { 00:28:12.364 "trtype": "TCP", 00:28:12.364 "adrfam": "IPv4", 00:28:12.364 "traddr": "10.0.0.2", 00:28:12.364 "trsvcid": "4420", 00:28:12.364 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:12.364 }, 00:28:12.364 "ctrlr_data": { 00:28:12.364 "cntlid": 2, 00:28:12.364 "vendor_id": "0x8086", 00:28:12.364 "model_number": "SPDK bdev Controller", 00:28:12.364 "serial_number": "00000000000000000000", 00:28:12.364 "firmware_revision": "24.05.1", 00:28:12.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:12.364 "oacs": { 00:28:12.364 "security": 0, 00:28:12.364 "format": 0, 00:28:12.364 "firmware": 0, 00:28:12.364 "ns_manage": 0 00:28:12.364 }, 00:28:12.364 "multi_ctrlr": true, 00:28:12.364 "ana_reporting": false 00:28:12.364 }, 00:28:12.364 "vs": { 00:28:12.364 "nvme_version": "1.3" 00:28:12.364 }, 00:28:12.364 "ns_data": { 00:28:12.364 "id": 1, 00:28:12.364 "can_share": true 00:28:12.364 } 00:28:12.364 } 00:28:12.364 ], 00:28:12.364 "mp_policy": "active_passive" 00:28:12.364 } 00:28:12.364 } 00:28:12.364 ] 00:28:12.364 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.364 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@53 -- # mktemp 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2VemP3mSR9 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2VemP3mSR9 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.365 [2024-07-25 19:01:24.062928] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:12.365 [2024-07-25 19:01:24.063132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2VemP3mSR9 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.365 [2024-07-25 19:01:24.070940] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2VemP3mSR9 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.365 [2024-07-25 19:01:24.078947] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:12.365 [2024-07-25 19:01:24.079010] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:12.365 nvme0n1 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.365 [ 00:28:12.365 { 00:28:12.365 "name": "nvme0n1", 00:28:12.365 "aliases": [ 00:28:12.365 "2e8d451a-394e-4b57-988e-542efdc3be7d" 00:28:12.365 ], 
00:28:12.365 "product_name": "NVMe disk", 00:28:12.365 "block_size": 512, 00:28:12.365 "num_blocks": 2097152, 00:28:12.365 "uuid": "2e8d451a-394e-4b57-988e-542efdc3be7d", 00:28:12.365 "assigned_rate_limits": { 00:28:12.365 "rw_ios_per_sec": 0, 00:28:12.365 "rw_mbytes_per_sec": 0, 00:28:12.365 "r_mbytes_per_sec": 0, 00:28:12.365 "w_mbytes_per_sec": 0 00:28:12.365 }, 00:28:12.365 "claimed": false, 00:28:12.365 "zoned": false, 00:28:12.365 "supported_io_types": { 00:28:12.365 "read": true, 00:28:12.365 "write": true, 00:28:12.365 "unmap": false, 00:28:12.365 "write_zeroes": true, 00:28:12.365 "flush": true, 00:28:12.365 "reset": true, 00:28:12.365 "compare": true, 00:28:12.365 "compare_and_write": true, 00:28:12.365 "abort": true, 00:28:12.365 "nvme_admin": true, 00:28:12.365 "nvme_io": true 00:28:12.365 }, 00:28:12.365 "memory_domains": [ 00:28:12.365 { 00:28:12.365 "dma_device_id": "system", 00:28:12.365 "dma_device_type": 1 00:28:12.365 } 00:28:12.365 ], 00:28:12.365 "driver_specific": { 00:28:12.365 "nvme": [ 00:28:12.365 { 00:28:12.365 "trid": { 00:28:12.365 "trtype": "TCP", 00:28:12.365 "adrfam": "IPv4", 00:28:12.365 "traddr": "10.0.0.2", 00:28:12.365 "trsvcid": "4421", 00:28:12.365 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:12.365 }, 00:28:12.365 "ctrlr_data": { 00:28:12.365 "cntlid": 3, 00:28:12.365 "vendor_id": "0x8086", 00:28:12.365 "model_number": "SPDK bdev Controller", 00:28:12.365 "serial_number": "00000000000000000000", 00:28:12.365 "firmware_revision": "24.05.1", 00:28:12.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:12.365 "oacs": { 00:28:12.365 "security": 0, 00:28:12.365 "format": 0, 00:28:12.365 "firmware": 0, 00:28:12.365 "ns_manage": 0 00:28:12.365 }, 00:28:12.365 "multi_ctrlr": true, 00:28:12.365 "ana_reporting": false 00:28:12.365 }, 00:28:12.365 "vs": { 00:28:12.365 "nvme_version": "1.3" 00:28:12.365 }, 00:28:12.365 "ns_data": { 00:28:12.365 "id": 1, 00:28:12.365 "can_share": true 00:28:12.365 } 00:28:12.365 } 00:28:12.365 ], 00:28:12.365 "mp_policy": "active_passive" 00:28:12.365 } 00:28:12.365 } 00:28:12.365 ] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.2VemP3mSR9 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.365 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:12.365 rmmod nvme_tcp 00:28:12.365 rmmod nvme_fabrics 00:28:12.365 rmmod nvme_keyring 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3627100 ']' 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3627100 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3627100 ']' 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3627100 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3627100 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3627100' 00:28:12.627 killing process with pid 3627100 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3627100 00:28:12.627 [2024-07-25 19:01:24.275197] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:12.627 [2024-07-25 19:01:24.275234] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3627100 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.627 19:01:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.162 19:01:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.162 00:28:15.162 real 0m5.512s 00:28:15.162 user 0m2.105s 00:28:15.162 sys 0m1.788s 00:28:15.162 19:01:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:15.162 19:01:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:15.162 ************************************ 00:28:15.162 END TEST nvmf_async_init 00:28:15.162 ************************************ 00:28:15.162 19:01:26 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:15.162 19:01:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:15.162 19:01:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:15.162 
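Stripped of the xtrace noise, the nvmf_async_init run above reduces to the following RPC sequence. The rpc.py invocation and the $KEY temp-file handling are illustrative assumptions (the suite goes through its rpc_cmd helper and mktemp); every RPC name, argument, and value is taken from the trace.

  RPC=./scripts/rpc.py                                     # assumed path; the test uses rpc_cmd

  # Target side: TCP transport, a 1024 MiB null bdev, and a subsystem exposing it on port 4420.
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_null_create null0 1024 512
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2e8d451a394e4b57988e542efdc3be7d
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Host side: attach, inspect, reset, detach (cntlid goes from 1 to 2 across the reset in the log).
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  $RPC bdev_get_bdevs -b nvme0n1
  $RPC bdev_nvme_reset_controller nvme0
  $RPC bdev_nvme_detach_controller nvme0

  # TLS variant: a PSK-protected listener on 4421 restricted to a single host NQN.
  KEY=$(mktemp)                                            # /tmp/tmp.2VemP3mSR9 in the trace
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
  chmod 0600 "$KEY"
  $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

The deprecation warnings in the log (nvmf_tcp_psk_path, spdk_nvme_ctrlr_opts.psk) refer to this PSK-path mechanism, which this SPDK version still accepts but flags for removal in v24.09.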
19:01:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.162 ************************************ 00:28:15.162 START TEST dma 00:28:15.162 ************************************ 00:28:15.162 19:01:26 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:15.162 * Looking for test storage... 00:28:15.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.162 19:01:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.162 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.163 19:01:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.163 19:01:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.163 19:01:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.163 19:01:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:15.163 19:01:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.163 19:01:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.163 19:01:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:15.163 19:01:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:15.163 00:28:15.163 real 0m0.060s 00:28:15.163 user 0m0.021s 00:28:15.163 sys 0m0.045s 00:28:15.163 19:01:26 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:15.163 19:01:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:15.163 ************************************ 00:28:15.163 END TEST dma 00:28:15.163 ************************************ 00:28:15.163 19:01:26 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:15.163 19:01:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:15.163 19:01:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:15.163 19:01:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.163 ************************************ 00:28:15.163 START TEST 
nvmf_identify 00:28:15.163 ************************************ 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:15.163 * Looking for test storage... 00:28:15.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.163 19:01:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:28:17.071 00:28:17.071 --- 10.0.0.2 ping statistics --- 00:28:17.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.071 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:17.071 00:28:17.071 --- 10.0.0.1 ping statistics --- 00:28:17.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.071 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.071 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3629221 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3629221 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3629221 ']' 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:17.072 19:01:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.072 [2024-07-25 19:01:28.742324] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:17.072 [2024-07-25 19:01:28.742414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.072 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.072 [2024-07-25 19:01:28.811235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.072 [2024-07-25 19:01:28.904099] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:17.072 [2024-07-25 19:01:28.904152] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.072 [2024-07-25 19:01:28.904181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.072 [2024-07-25 19:01:28.904193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.072 [2024-07-25 19:01:28.904203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.072 [2024-07-25 19:01:28.907080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.072 [2024-07-25 19:01:28.907152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.072 [2024-07-25 19:01:28.907220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.072 [2024-07-25 19:01:28.907223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.332 [2024-07-25 19:01:29.040847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.332 Malloc0 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.332 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.333 [2024-07-25 19:01:29.118222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.333 [ 00:28:17.333 { 00:28:17.333 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:17.333 "subtype": "Discovery", 00:28:17.333 "listen_addresses": [ 00:28:17.333 { 00:28:17.333 "trtype": "TCP", 00:28:17.333 "adrfam": "IPv4", 00:28:17.333 "traddr": "10.0.0.2", 00:28:17.333 "trsvcid": "4420" 00:28:17.333 } 00:28:17.333 ], 00:28:17.333 "allow_any_host": true, 00:28:17.333 "hosts": [] 00:28:17.333 }, 00:28:17.333 { 00:28:17.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.333 "subtype": "NVMe", 00:28:17.333 "listen_addresses": [ 00:28:17.333 { 00:28:17.333 "trtype": "TCP", 00:28:17.333 "adrfam": "IPv4", 00:28:17.333 "traddr": "10.0.0.2", 00:28:17.333 "trsvcid": "4420" 00:28:17.333 } 00:28:17.333 ], 00:28:17.333 "allow_any_host": true, 00:28:17.333 "hosts": [], 00:28:17.333 "serial_number": "SPDK00000000000001", 00:28:17.333 "model_number": "SPDK bdev Controller", 00:28:17.333 "max_namespaces": 32, 00:28:17.333 "min_cntlid": 1, 00:28:17.333 "max_cntlid": 65519, 00:28:17.333 "namespaces": [ 00:28:17.333 { 00:28:17.333 "nsid": 1, 00:28:17.333 "bdev_name": "Malloc0", 00:28:17.333 "name": "Malloc0", 00:28:17.333 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:17.333 "eui64": "ABCDEF0123456789", 00:28:17.333 "uuid": "721ebdd9-c67d-4add-b803-6401369efaff" 00:28:17.333 } 00:28:17.333 ] 00:28:17.333 } 00:28:17.333 ] 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.333 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:17.333 [2024-07-25 19:01:29.158491] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
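Outside the test harness, the rpc_cmd calls traced above are ordinary SPDK JSON-RPC requests (rpc_cmd effectively forwards its arguments to scripts/rpc.py). A sketch of the same subsystem configuration driven by hand, assuming the default /var/tmp/spdk.sock socket the target is listening on in this run:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # nvmf_get_subsystems should now report both the discovery subsystem and cnode1,
  # as in the JSON dump above
  $RPC nvmf_get_subsystems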
00:28:17.333 [2024-07-25 19:01:29.158533] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629250 ] 00:28:17.333 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.333 [2024-07-25 19:01:29.192702] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:17.333 [2024-07-25 19:01:29.192772] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:17.333 [2024-07-25 19:01:29.192782] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:17.333 [2024-07-25 19:01:29.192797] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:17.333 [2024-07-25 19:01:29.192810] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:17.333 [2024-07-25 19:01:29.196105] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:17.333 [2024-07-25 19:01:29.196169] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2130120 0 00:28:17.333 [2024-07-25 19:01:29.203073] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:17.333 [2024-07-25 19:01:29.203096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:17.333 [2024-07-25 19:01:29.203104] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:17.333 [2024-07-25 19:01:29.203110] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:17.333 [2024-07-25 19:01:29.203159] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.333 [2024-07-25 19:01:29.203172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.333 [2024-07-25 19:01:29.203180] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.333 [2024-07-25 19:01:29.203198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:17.333 [2024-07-25 19:01:29.203226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.596 [2024-07-25 19:01:29.210073] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.596 [2024-07-25 19:01:29.210093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.596 [2024-07-25 19:01:29.210100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210109] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.596 [2024-07-25 19:01:29.210131] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:17.596 [2024-07-25 19:01:29.210144] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:17.596 [2024-07-25 19:01:29.210154] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:17.596 [2024-07-25 19:01:29.210179] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210196] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.596 [2024-07-25 19:01:29.210208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.596 [2024-07-25 19:01:29.210232] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.596 [2024-07-25 19:01:29.210343] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.596 [2024-07-25 19:01:29.210355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.596 [2024-07-25 19:01:29.210362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210369] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.596 [2024-07-25 19:01:29.210383] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:17.596 [2024-07-25 19:01:29.210398] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:17.596 [2024-07-25 19:01:29.210410] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210418] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.596 [2024-07-25 19:01:29.210441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.596 [2024-07-25 19:01:29.210462] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.596 [2024-07-25 19:01:29.210566] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.596 [2024-07-25 19:01:29.210580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.596 [2024-07-25 19:01:29.210587] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210594] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.596 [2024-07-25 19:01:29.210605] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:17.596 [2024-07-25 19:01:29.210620] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:17.596 [2024-07-25 19:01:29.210632] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.596 [2024-07-25 19:01:29.210657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.596 [2024-07-25 19:01:29.210678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.596 [2024-07-25 19:01:29.210787] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.596 [2024-07-25 
19:01:29.210798] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.596 [2024-07-25 19:01:29.210805] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.596 [2024-07-25 19:01:29.210822] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:17.596 [2024-07-25 19:01:29.210839] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210847] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.210854] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.596 [2024-07-25 19:01:29.210865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.596 [2024-07-25 19:01:29.210885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.596 [2024-07-25 19:01:29.210992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.596 [2024-07-25 19:01:29.211003] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.596 [2024-07-25 19:01:29.211010] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.211017] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.596 [2024-07-25 19:01:29.211027] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:17.596 [2024-07-25 19:01:29.211036] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:17.596 [2024-07-25 19:01:29.211049] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:17.596 [2024-07-25 19:01:29.211159] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:17.596 [2024-07-25 19:01:29.211169] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:17.596 [2024-07-25 19:01:29.211188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.211197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.211203] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.596 [2024-07-25 19:01:29.211214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.596 [2024-07-25 19:01:29.211236] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.596 [2024-07-25 19:01:29.211355] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.596 [2024-07-25 19:01:29.211370] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.596 [2024-07-25 19:01:29.211376] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:17.596 [2024-07-25 19:01:29.211383] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.596 [2024-07-25 19:01:29.211393] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:17.597 [2024-07-25 19:01:29.211410] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211419] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211426] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.211437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.597 [2024-07-25 19:01:29.211457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.597 [2024-07-25 19:01:29.211553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.597 [2024-07-25 19:01:29.211565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.597 [2024-07-25 19:01:29.211571] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211578] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.597 [2024-07-25 19:01:29.211588] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:17.597 [2024-07-25 19:01:29.211596] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:17.597 [2024-07-25 19:01:29.211610] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:17.597 [2024-07-25 19:01:29.211624] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:17.597 [2024-07-25 19:01:29.211641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211650] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.211661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.597 [2024-07-25 19:01:29.211682] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.597 [2024-07-25 19:01:29.211823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.597 [2024-07-25 19:01:29.211838] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.597 [2024-07-25 19:01:29.211845] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211852] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2130120): datao=0, datal=4096, cccid=0 00:28:17.597 [2024-07-25 19:01:29.211860] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21891f0) on tqpair(0x2130120): expected_datao=0, payload_size=4096 00:28:17.597 [2024-07-25 19:01:29.211872] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211893] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211903] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.597 [2024-07-25 19:01:29.211975] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.597 [2024-07-25 19:01:29.211982] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.211989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.597 [2024-07-25 19:01:29.212007] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:17.597 [2024-07-25 19:01:29.212017] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:17.597 [2024-07-25 19:01:29.212026] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:17.597 [2024-07-25 19:01:29.212034] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:17.597 [2024-07-25 19:01:29.212042] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:17.597 [2024-07-25 19:01:29.212051] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:17.597 [2024-07-25 19:01:29.212072] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:17.597 [2024-07-25 19:01:29.212086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212094] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.212112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.597 [2024-07-25 19:01:29.212134] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.597 [2024-07-25 19:01:29.212248] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.597 [2024-07-25 19:01:29.212263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.597 [2024-07-25 19:01:29.212270] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212277] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21891f0) on tqpair=0x2130120 00:28:17.597 [2024-07-25 19:01:29.212292] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212299] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212306] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.212316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:17.597 [2024-07-25 19:01:29.212327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212334] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212341] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.212350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.597 [2024-07-25 19:01:29.212360] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212367] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212373] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.212387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.597 [2024-07-25 19:01:29.212398] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.212421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.597 [2024-07-25 19:01:29.212430] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:17.597 [2024-07-25 19:01:29.212449] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:17.597 [2024-07-25 19:01:29.212462] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212470] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.212495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.597 [2024-07-25 19:01:29.212518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21891f0, cid 0, qid 0 00:28:17.597 [2024-07-25 19:01:29.212528] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189350, cid 1, qid 0 00:28:17.597 [2024-07-25 19:01:29.212536] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21894b0, cid 2, qid 0 00:28:17.597 [2024-07-25 19:01:29.212559] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.597 [2024-07-25 19:01:29.212567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189770, cid 4, qid 0 00:28:17.597 [2024-07-25 19:01:29.212695] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.597 [2024-07-25 19:01:29.212707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.597 [2024-07-25 19:01:29.212714] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189770) on tqpair=0x2130120 
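Everything from the icreq onward in this trace is the generic fabrics bring-up that spdk_nvme_identify performs before it can read any identify data: FABRIC CONNECT on the admin queue, property reads of VS and CAP, enabling the controller with CC.EN = 1, polling CSTS.RDY, IDENTIFY CONTROLLER, AER configuration and the keep-alive timer. The same tool can equally be pointed at the data subsystem created earlier rather than the discovery NQN; a sketch using the address and port of this run (-L all reproduces the DEBUG tracing seen here and can be dropped for plain output):

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all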
00:28:17.597 [2024-07-25 19:01:29.212732] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:17.597 [2024-07-25 19:01:29.212741] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:17.597 [2024-07-25 19:01:29.212759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212768] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2130120) 00:28:17.597 [2024-07-25 19:01:29.212779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.597 [2024-07-25 19:01:29.212799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189770, cid 4, qid 0 00:28:17.597 [2024-07-25 19:01:29.212919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.597 [2024-07-25 19:01:29.212933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.597 [2024-07-25 19:01:29.212940] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212947] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2130120): datao=0, datal=4096, cccid=4 00:28:17.597 [2024-07-25 19:01:29.212955] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2189770) on tqpair(0x2130120): expected_datao=0, payload_size=4096 00:28:17.597 [2024-07-25 19:01:29.212962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212979] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.212988] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.256076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.597 [2024-07-25 19:01:29.256097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.597 [2024-07-25 19:01:29.256105] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.256113] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189770) on tqpair=0x2130120 00:28:17.597 [2024-07-25 19:01:29.256135] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:17.597 [2024-07-25 19:01:29.256173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.597 [2024-07-25 19:01:29.256184] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2130120) 00:28:17.598 [2024-07-25 19:01:29.256196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.598 [2024-07-25 19:01:29.256209] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.256216] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.256223] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2130120) 00:28:17.598 [2024-07-25 19:01:29.256232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.598 [2024-07-25 19:01:29.256261] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189770, cid 4, qid 0 00:28:17.598 [2024-07-25 19:01:29.256274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21898d0, cid 5, qid 0 00:28:17.598 [2024-07-25 19:01:29.256430] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.598 [2024-07-25 19:01:29.256442] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.598 [2024-07-25 19:01:29.256449] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.256456] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2130120): datao=0, datal=1024, cccid=4 00:28:17.598 [2024-07-25 19:01:29.256464] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2189770) on tqpair(0x2130120): expected_datao=0, payload_size=1024 00:28:17.598 [2024-07-25 19:01:29.256472] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.256482] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.256490] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.256499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.598 [2024-07-25 19:01:29.256508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.598 [2024-07-25 19:01:29.256515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.256522] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21898d0) on tqpair=0x2130120 00:28:17.598 [2024-07-25 19:01:29.298146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.598 [2024-07-25 19:01:29.298166] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.598 [2024-07-25 19:01:29.298173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298180] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189770) on tqpair=0x2130120 00:28:17.598 [2024-07-25 19:01:29.298205] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298215] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2130120) 00:28:17.598 [2024-07-25 19:01:29.298227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.598 [2024-07-25 19:01:29.298258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189770, cid 4, qid 0 00:28:17.598 [2024-07-25 19:01:29.298393] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.598 [2024-07-25 19:01:29.298409] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.598 [2024-07-25 19:01:29.298422] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298430] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2130120): datao=0, datal=3072, cccid=4 00:28:17.598 [2024-07-25 19:01:29.298438] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2189770) on tqpair(0x2130120): expected_datao=0, payload_size=3072 00:28:17.598 [2024-07-25 19:01:29.298446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298456] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:28:17.598 [2024-07-25 19:01:29.298464] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.598 [2024-07-25 19:01:29.298486] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.598 [2024-07-25 19:01:29.298493] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298500] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189770) on tqpair=0x2130120 00:28:17.598 [2024-07-25 19:01:29.298515] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298524] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2130120) 00:28:17.598 [2024-07-25 19:01:29.298535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.598 [2024-07-25 19:01:29.298563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189770, cid 4, qid 0 00:28:17.598 [2024-07-25 19:01:29.298692] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.598 [2024-07-25 19:01:29.298707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.598 [2024-07-25 19:01:29.298714] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298721] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2130120): datao=0, datal=8, cccid=4 00:28:17.598 [2024-07-25 19:01:29.298729] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2189770) on tqpair(0x2130120): expected_datao=0, payload_size=8 00:28:17.598 [2024-07-25 19:01:29.298736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298746] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.298754] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.344082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.598 [2024-07-25 19:01:29.344101] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.598 [2024-07-25 19:01:29.344108] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.598 [2024-07-25 19:01:29.344115] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189770) on tqpair=0x2130120 00:28:17.598 ===================================================== 00:28:17.598 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:17.598 ===================================================== 00:28:17.598 Controller Capabilities/Features 00:28:17.598 ================================ 00:28:17.598 Vendor ID: 0000 00:28:17.598 Subsystem Vendor ID: 0000 00:28:17.598 Serial Number: .................... 00:28:17.598 Model Number: ........................................ 
00:28:17.598 Firmware Version: 24.05.1 00:28:17.598 Recommended Arb Burst: 0 00:28:17.598 IEEE OUI Identifier: 00 00 00 00:28:17.598 Multi-path I/O 00:28:17.598 May have multiple subsystem ports: No 00:28:17.598 May have multiple controllers: No 00:28:17.598 Associated with SR-IOV VF: No 00:28:17.598 Max Data Transfer Size: 131072 00:28:17.598 Max Number of Namespaces: 0 00:28:17.598 Max Number of I/O Queues: 1024 00:28:17.598 NVMe Specification Version (VS): 1.3 00:28:17.598 NVMe Specification Version (Identify): 1.3 00:28:17.598 Maximum Queue Entries: 128 00:28:17.598 Contiguous Queues Required: Yes 00:28:17.598 Arbitration Mechanisms Supported 00:28:17.598 Weighted Round Robin: Not Supported 00:28:17.598 Vendor Specific: Not Supported 00:28:17.598 Reset Timeout: 15000 ms 00:28:17.598 Doorbell Stride: 4 bytes 00:28:17.598 NVM Subsystem Reset: Not Supported 00:28:17.598 Command Sets Supported 00:28:17.598 NVM Command Set: Supported 00:28:17.598 Boot Partition: Not Supported 00:28:17.598 Memory Page Size Minimum: 4096 bytes 00:28:17.598 Memory Page Size Maximum: 4096 bytes 00:28:17.598 Persistent Memory Region: Not Supported 00:28:17.598 Optional Asynchronous Events Supported 00:28:17.598 Namespace Attribute Notices: Not Supported 00:28:17.598 Firmware Activation Notices: Not Supported 00:28:17.598 ANA Change Notices: Not Supported 00:28:17.598 PLE Aggregate Log Change Notices: Not Supported 00:28:17.598 LBA Status Info Alert Notices: Not Supported 00:28:17.598 EGE Aggregate Log Change Notices: Not Supported 00:28:17.598 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.598 Zone Descriptor Change Notices: Not Supported 00:28:17.598 Discovery Log Change Notices: Supported 00:28:17.598 Controller Attributes 00:28:17.598 128-bit Host Identifier: Not Supported 00:28:17.598 Non-Operational Permissive Mode: Not Supported 00:28:17.598 NVM Sets: Not Supported 00:28:17.598 Read Recovery Levels: Not Supported 00:28:17.598 Endurance Groups: Not Supported 00:28:17.598 Predictable Latency Mode: Not Supported 00:28:17.598 Traffic Based Keep ALive: Not Supported 00:28:17.598 Namespace Granularity: Not Supported 00:28:17.598 SQ Associations: Not Supported 00:28:17.598 UUID List: Not Supported 00:28:17.598 Multi-Domain Subsystem: Not Supported 00:28:17.598 Fixed Capacity Management: Not Supported 00:28:17.598 Variable Capacity Management: Not Supported 00:28:17.598 Delete Endurance Group: Not Supported 00:28:17.598 Delete NVM Set: Not Supported 00:28:17.598 Extended LBA Formats Supported: Not Supported 00:28:17.598 Flexible Data Placement Supported: Not Supported 00:28:17.598 00:28:17.598 Controller Memory Buffer Support 00:28:17.598 ================================ 00:28:17.598 Supported: No 00:28:17.598 00:28:17.598 Persistent Memory Region Support 00:28:17.598 ================================ 00:28:17.598 Supported: No 00:28:17.598 00:28:17.598 Admin Command Set Attributes 00:28:17.598 ============================ 00:28:17.598 Security Send/Receive: Not Supported 00:28:17.598 Format NVM: Not Supported 00:28:17.598 Firmware Activate/Download: Not Supported 00:28:17.598 Namespace Management: Not Supported 00:28:17.598 Device Self-Test: Not Supported 00:28:17.598 Directives: Not Supported 00:28:17.598 NVMe-MI: Not Supported 00:28:17.598 Virtualization Management: Not Supported 00:28:17.598 Doorbell Buffer Config: Not Supported 00:28:17.598 Get LBA Status Capability: Not Supported 00:28:17.598 Command & Feature Lockdown Capability: Not Supported 00:28:17.598 Abort Command Limit: 1 00:28:17.598 
Async Event Request Limit: 4 00:28:17.598 Number of Firmware Slots: N/A 00:28:17.598 Firmware Slot 1 Read-Only: N/A 00:28:17.599 Firmware Activation Without Reset: N/A 00:28:17.599 Multiple Update Detection Support: N/A 00:28:17.599 Firmware Update Granularity: No Information Provided 00:28:17.599 Per-Namespace SMART Log: No 00:28:17.599 Asymmetric Namespace Access Log Page: Not Supported 00:28:17.599 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:17.599 Command Effects Log Page: Not Supported 00:28:17.599 Get Log Page Extended Data: Supported 00:28:17.599 Telemetry Log Pages: Not Supported 00:28:17.599 Persistent Event Log Pages: Not Supported 00:28:17.599 Supported Log Pages Log Page: May Support 00:28:17.599 Commands Supported & Effects Log Page: Not Supported 00:28:17.599 Feature Identifiers & Effects Log Page:May Support 00:28:17.599 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.599 Data Area 4 for Telemetry Log: Not Supported 00:28:17.599 Error Log Page Entries Supported: 128 00:28:17.599 Keep Alive: Not Supported 00:28:17.599 00:28:17.599 NVM Command Set Attributes 00:28:17.599 ========================== 00:28:17.599 Submission Queue Entry Size 00:28:17.599 Max: 1 00:28:17.599 Min: 1 00:28:17.599 Completion Queue Entry Size 00:28:17.599 Max: 1 00:28:17.599 Min: 1 00:28:17.599 Number of Namespaces: 0 00:28:17.599 Compare Command: Not Supported 00:28:17.599 Write Uncorrectable Command: Not Supported 00:28:17.599 Dataset Management Command: Not Supported 00:28:17.599 Write Zeroes Command: Not Supported 00:28:17.599 Set Features Save Field: Not Supported 00:28:17.599 Reservations: Not Supported 00:28:17.599 Timestamp: Not Supported 00:28:17.599 Copy: Not Supported 00:28:17.599 Volatile Write Cache: Not Present 00:28:17.599 Atomic Write Unit (Normal): 1 00:28:17.599 Atomic Write Unit (PFail): 1 00:28:17.599 Atomic Compare & Write Unit: 1 00:28:17.599 Fused Compare & Write: Supported 00:28:17.599 Scatter-Gather List 00:28:17.599 SGL Command Set: Supported 00:28:17.599 SGL Keyed: Supported 00:28:17.599 SGL Bit Bucket Descriptor: Not Supported 00:28:17.599 SGL Metadata Pointer: Not Supported 00:28:17.599 Oversized SGL: Not Supported 00:28:17.599 SGL Metadata Address: Not Supported 00:28:17.599 SGL Offset: Supported 00:28:17.599 Transport SGL Data Block: Not Supported 00:28:17.599 Replay Protected Memory Block: Not Supported 00:28:17.599 00:28:17.599 Firmware Slot Information 00:28:17.599 ========================= 00:28:17.599 Active slot: 0 00:28:17.599 00:28:17.599 00:28:17.599 Error Log 00:28:17.599 ========= 00:28:17.599 00:28:17.599 Active Namespaces 00:28:17.599 ================= 00:28:17.599 Discovery Log Page 00:28:17.599 ================== 00:28:17.599 Generation Counter: 2 00:28:17.599 Number of Records: 2 00:28:17.599 Record Format: 0 00:28:17.599 00:28:17.599 Discovery Log Entry 0 00:28:17.599 ---------------------- 00:28:17.599 Transport Type: 3 (TCP) 00:28:17.599 Address Family: 1 (IPv4) 00:28:17.599 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:17.599 Entry Flags: 00:28:17.599 Duplicate Returned Information: 1 00:28:17.599 Explicit Persistent Connection Support for Discovery: 1 00:28:17.599 Transport Requirements: 00:28:17.599 Secure Channel: Not Required 00:28:17.599 Port ID: 0 (0x0000) 00:28:17.599 Controller ID: 65535 (0xffff) 00:28:17.599 Admin Max SQ Size: 128 00:28:17.599 Transport Service Identifier: 4420 00:28:17.599 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:17.599 Transport Address: 10.0.0.2 00:28:17.599 
Discovery Log Entry 1 00:28:17.599 ---------------------- 00:28:17.599 Transport Type: 3 (TCP) 00:28:17.599 Address Family: 1 (IPv4) 00:28:17.599 Subsystem Type: 2 (NVM Subsystem) 00:28:17.599 Entry Flags: 00:28:17.599 Duplicate Returned Information: 0 00:28:17.599 Explicit Persistent Connection Support for Discovery: 0 00:28:17.599 Transport Requirements: 00:28:17.599 Secure Channel: Not Required 00:28:17.599 Port ID: 0 (0x0000) 00:28:17.599 Controller ID: 65535 (0xffff) 00:28:17.599 Admin Max SQ Size: 128 00:28:17.599 Transport Service Identifier: 4420 00:28:17.599 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:17.599 Transport Address: 10.0.0.2 [2024-07-25 19:01:29.344232] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:17.599 [2024-07-25 19:01:29.344256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.599 [2024-07-25 19:01:29.344269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.599 [2024-07-25 19:01:29.344279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.599 [2024-07-25 19:01:29.344290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.599 [2024-07-25 19:01:29.344309] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344320] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.599 [2024-07-25 19:01:29.344338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.599 [2024-07-25 19:01:29.344372] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.599 [2024-07-25 19:01:29.344492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.599 [2024-07-25 19:01:29.344508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.599 [2024-07-25 19:01:29.344515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344522] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.599 [2024-07-25 19:01:29.344536] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344544] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.599 [2024-07-25 19:01:29.344561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.599 [2024-07-25 19:01:29.344589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.599 [2024-07-25 19:01:29.344710] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.599 [2024-07-25 19:01:29.344725] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.599 [2024-07-25 19:01:29.344732] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344739] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.599 [2024-07-25 19:01:29.344749] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:17.599 [2024-07-25 19:01:29.344757] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:17.599 [2024-07-25 19:01:29.344774] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344783] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.599 [2024-07-25 19:01:29.344800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.599 [2024-07-25 19:01:29.344821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.599 [2024-07-25 19:01:29.344920] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.599 [2024-07-25 19:01:29.344933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.599 [2024-07-25 19:01:29.344940] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.599 [2024-07-25 19:01:29.344965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.344981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.599 [2024-07-25 19:01:29.344992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.599 [2024-07-25 19:01:29.345012] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.599 [2024-07-25 19:01:29.345118] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.599 [2024-07-25 19:01:29.345134] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.599 [2024-07-25 19:01:29.345142] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.345149] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.599 [2024-07-25 19:01:29.345168] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.345179] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.599 [2024-07-25 19:01:29.345191] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.599 [2024-07-25 19:01:29.345202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.599 [2024-07-25 19:01:29.345224] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.599 [2024-07-25 19:01:29.345334] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.599 [2024-07-25 
19:01:29.345355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.345365] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.345392] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345402] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345408] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.345419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.345441] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.345543] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.345556] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.345563] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345570] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.345589] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345599] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345606] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.345616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.345638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.345736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.345749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.345757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.345781] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345791] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345797] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.345809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.345830] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.345927] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.345941] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.345948] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
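The run of FABRIC PROPERTY SET/GET pairs on cid:3 from here on is the orderly teardown of the discovery controller: the outstanding ASYNC EVENT REQUESTs are aborted (SQ DELETION above), a CC write initiates shutdown, and CSTS is polled until the shutdown status reports complete. As a hypothetical cross-check, since nvme-tcp was modprobed earlier in the run, the two-entry discovery log printed above could also be fetched with the kernel initiator, assuming nvme-cli is installed on the host (not something this test does):

  nvme discover -t tcp -a 10.0.0.2 -s 4420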
00:28:17.600 [2024-07-25 19:01:29.345955] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.345974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345984] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.345991] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.346006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.346028] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.346139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.346155] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.346162] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346169] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.346187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346203] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.346214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.346235] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.346338] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.346353] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.346359] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346366] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.346384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346394] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346401] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.346412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.346432] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.346527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.346539] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.346546] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346553] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.346571] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346580] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346587] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.346597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.346617] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.346713] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.346728] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.346735] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346742] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.346760] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346769] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346776] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.346787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.346812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.346907] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.346919] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.346926] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346933] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.346951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346960] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.346967] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.346977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.346998] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.347107] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.347122] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.347129] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.347136] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.347155] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.347164] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 
19:01:29.347171] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.347182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.347203] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.347302] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.600 [2024-07-25 19:01:29.347315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.600 [2024-07-25 19:01:29.347322] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.347328] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.600 [2024-07-25 19:01:29.347346] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.347355] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.600 [2024-07-25 19:01:29.347362] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.600 [2024-07-25 19:01:29.347373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.600 [2024-07-25 19:01:29.347393] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.600 [2024-07-25 19:01:29.347489] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.347504] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.347511] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.601 [2024-07-25 19:01:29.347535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347552] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.601 [2024-07-25 19:01:29.347563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.601 [2024-07-25 19:01:29.347590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.601 [2024-07-25 19:01:29.347687] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.347699] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.347706] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347713] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.601 [2024-07-25 19:01:29.347731] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347747] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.601 [2024-07-25 19:01:29.347757] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.601 [2024-07-25 19:01:29.347778] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.601 [2024-07-25 19:01:29.347875] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.347889] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.347896] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347903] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.601 [2024-07-25 19:01:29.347921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347931] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.347937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.601 [2024-07-25 19:01:29.347948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.601 [2024-07-25 19:01:29.347969] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.601 [2024-07-25 19:01:29.352083] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.352111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.352119] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.352126] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.601 [2024-07-25 19:01:29.352146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.352156] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.352163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2130120) 00:28:17.601 [2024-07-25 19:01:29.352174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.601 [2024-07-25 19:01:29.352196] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2189610, cid 3, qid 0 00:28:17.601 [2024-07-25 19:01:29.352303] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.352315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.352323] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.352329] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2189610) on tqpair=0x2130120 00:28:17.601 [2024-07-25 19:01:29.352345] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:17.601 00:28:17.601 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:17.601 [2024-07-25 19:01:29.387129] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
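For reference, the spdk_nvme_identify invocation above exercises the same connect-and-identify path that an application would drive through SPDK's public host API: parse the transport ID, connect (which performs the ICREQ/ICRESP exchange, FABRIC CONNECT, property reads and controller enable traced below), read the cached IDENTIFY CONTROLLER data, and detach. The following is only a minimal sketch under the assumption of SPDK v24.05 headers; the program name, the printed fields and the abbreviated error handling are illustrative and are not part of the test run.

/*
 * Minimal sketch (not part of the test): connect to the same NVMe-oF TCP
 * target that host/identify.sh points spdk_nvme_identify at, print a few
 * IDENTIFY CONTROLLER fields, and detach. Assumes SPDK v24.05 public headers.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";      /* illustrative process name */
        if (spdk_env_init(&env_opts) < 0) {
                return 1;
        }

        /* Same transport ID string the test passes via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* Synchronous connect: runs the ICREQ/CONNECT/enable sequence seen in this trace. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }

        /* IDENTIFY CONTROLLER data is cached by the driver during initialization. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("VID: 0x%04x  MN: %-.40s  FR: %-.8s  NN: %u\n",
               cdata->vid, cdata->mn, cdata->fr, cdata->nn);

        /* Detach triggers the shutdown/CSTS poll seen for the discovery controller earlier. */
        spdk_nvme_detach(ctrlr);
        return 0;
}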
00:28:17.601 [2024-07-25 19:01:29.387181] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629276 ] 00:28:17.601 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.601 [2024-07-25 19:01:29.425865] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:17.601 [2024-07-25 19:01:29.425914] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:17.601 [2024-07-25 19:01:29.425924] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:17.601 [2024-07-25 19:01:29.425938] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:17.601 [2024-07-25 19:01:29.425950] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:17.601 [2024-07-25 19:01:29.426168] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:17.601 [2024-07-25 19:01:29.426213] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d33120 0 00:28:17.601 [2024-07-25 19:01:29.441079] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:17.601 [2024-07-25 19:01:29.441106] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:17.601 [2024-07-25 19:01:29.441114] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:17.601 [2024-07-25 19:01:29.441120] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:17.601 [2024-07-25 19:01:29.441157] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.441170] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.441177] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.601 [2024-07-25 19:01:29.441191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:17.601 [2024-07-25 19:01:29.441217] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.601 [2024-07-25 19:01:29.449085] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.449105] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.449113] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.449121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.601 [2024-07-25 19:01:29.449141] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:17.601 [2024-07-25 19:01:29.449156] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:17.601 [2024-07-25 19:01:29.449165] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:17.601 [2024-07-25 19:01:29.449186] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.449195] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.601 [2024-07-25 
19:01:29.449201] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.601 [2024-07-25 19:01:29.449213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.601 [2024-07-25 19:01:29.449237] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.601 [2024-07-25 19:01:29.449386] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.449402] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.449410] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.449421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.601 [2024-07-25 19:01:29.449435] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:17.601 [2024-07-25 19:01:29.449451] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:17.601 [2024-07-25 19:01:29.449467] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.449475] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.449481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.601 [2024-07-25 19:01:29.449492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.601 [2024-07-25 19:01:29.449515] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.601 [2024-07-25 19:01:29.449608] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.601 [2024-07-25 19:01:29.449624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.601 [2024-07-25 19:01:29.449632] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.601 [2024-07-25 19:01:29.449641] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.602 [2024-07-25 19:01:29.449652] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:17.602 [2024-07-25 19:01:29.449667] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:17.602 [2024-07-25 19:01:29.449680] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.449691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.449698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.449709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.602 [2024-07-25 19:01:29.449731] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.602 [2024-07-25 19:01:29.449833] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.602 [2024-07-25 19:01:29.449849] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:28:17.602 [2024-07-25 19:01:29.449856] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.449863] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.602 [2024-07-25 19:01:29.449873] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:17.602 [2024-07-25 19:01:29.449892] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.449903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.449909] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.449920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.602 [2024-07-25 19:01:29.449942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.602 [2024-07-25 19:01:29.450043] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.602 [2024-07-25 19:01:29.450068] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.602 [2024-07-25 19:01:29.450079] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450086] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.602 [2024-07-25 19:01:29.450096] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:17.602 [2024-07-25 19:01:29.450109] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:17.602 [2024-07-25 19:01:29.450125] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:17.602 [2024-07-25 19:01:29.450238] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:17.602 [2024-07-25 19:01:29.450245] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:17.602 [2024-07-25 19:01:29.450259] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.450283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.602 [2024-07-25 19:01:29.450304] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.602 [2024-07-25 19:01:29.450433] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.602 [2024-07-25 19:01:29.450449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.602 [2024-07-25 19:01:29.450457] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450466] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on 
tqpair=0x1d33120 00:28:17.602 [2024-07-25 19:01:29.450477] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:17.602 [2024-07-25 19:01:29.450495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450513] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.450525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.602 [2024-07-25 19:01:29.450547] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.602 [2024-07-25 19:01:29.450648] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.602 [2024-07-25 19:01:29.450664] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.602 [2024-07-25 19:01:29.450672] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450681] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.602 [2024-07-25 19:01:29.450691] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:17.602 [2024-07-25 19:01:29.450699] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:17.602 [2024-07-25 19:01:29.450713] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:17.602 [2024-07-25 19:01:29.450733] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:17.602 [2024-07-25 19:01:29.450751] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450760] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.450771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.602 [2024-07-25 19:01:29.450793] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.602 [2024-07-25 19:01:29.450928] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.602 [2024-07-25 19:01:29.450947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.602 [2024-07-25 19:01:29.450955] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.450963] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=4096, cccid=0 00:28:17.602 [2024-07-25 19:01:29.450977] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c1f0) on tqpair(0x1d33120): expected_datao=0, payload_size=4096 00:28:17.602 [2024-07-25 19:01:29.450988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451014] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451026] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.602 [2024-07-25 19:01:29.451094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.602 [2024-07-25 19:01:29.451101] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451108] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.602 [2024-07-25 19:01:29.451125] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:17.602 [2024-07-25 19:01:29.451135] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:17.602 [2024-07-25 19:01:29.451143] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:17.602 [2024-07-25 19:01:29.451150] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:17.602 [2024-07-25 19:01:29.451157] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:17.602 [2024-07-25 19:01:29.451165] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:17.602 [2024-07-25 19:01:29.451181] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:17.602 [2024-07-25 19:01:29.451200] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451208] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451215] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.451226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.602 [2024-07-25 19:01:29.451249] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.602 [2024-07-25 19:01:29.451360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.602 [2024-07-25 19:01:29.451376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.602 [2024-07-25 19:01:29.451383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c1f0) on tqpair=0x1d33120 00:28:17.602 [2024-07-25 19:01:29.451403] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451417] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.451427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.602 [2024-07-25 19:01:29.451437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451444] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451450] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.451459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.602 [2024-07-25 19:01:29.451473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451487] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d33120) 00:28:17.602 [2024-07-25 19:01:29.451496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.602 [2024-07-25 19:01:29.451506] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451513] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.602 [2024-07-25 19:01:29.451519] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.603 [2024-07-25 19:01:29.451545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.603 [2024-07-25 19:01:29.451554] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:17.603 [2024-07-25 19:01:29.451574] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:17.603 [2024-07-25 19:01:29.451588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.451610] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d33120) 00:28:17.603 [2024-07-25 19:01:29.451621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.603 [2024-07-25 19:01:29.451643] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c1f0, cid 0, qid 0 00:28:17.603 [2024-07-25 19:01:29.451653] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c350, cid 1, qid 0 00:28:17.603 [2024-07-25 19:01:29.451676] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c4b0, cid 2, qid 0 00:28:17.603 [2024-07-25 19:01:29.451683] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.603 [2024-07-25 19:01:29.451691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c770, cid 4, qid 0 00:28:17.603 [2024-07-25 19:01:29.451842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.603 [2024-07-25 19:01:29.451858] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.603 [2024-07-25 19:01:29.451865] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.451875] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c770) on tqpair=0x1d33120 00:28:17.603 [2024-07-25 19:01:29.451885] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:17.603 [2024-07-25 19:01:29.451894] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:28:17.603 [2024-07-25 19:01:29.451909] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:17.603 [2024-07-25 19:01:29.451922] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:17.603 [2024-07-25 19:01:29.451935] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.451942] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.451949] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d33120) 00:28:17.603 [2024-07-25 19:01:29.451973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:17.603 [2024-07-25 19:01:29.451995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c770, cid 4, qid 0 00:28:17.603 [2024-07-25 19:01:29.452164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.603 [2024-07-25 19:01:29.452181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.603 [2024-07-25 19:01:29.452189] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.452199] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c770) on tqpair=0x1d33120 00:28:17.603 [2024-07-25 19:01:29.452269] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:17.603 [2024-07-25 19:01:29.452291] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:17.603 [2024-07-25 19:01:29.452307] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.452315] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d33120) 00:28:17.603 [2024-07-25 19:01:29.452326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.603 [2024-07-25 19:01:29.452364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c770, cid 4, qid 0 00:28:17.603 [2024-07-25 19:01:29.452558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.603 [2024-07-25 19:01:29.452579] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.603 [2024-07-25 19:01:29.452591] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.452597] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=4096, cccid=4 00:28:17.603 [2024-07-25 19:01:29.452605] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c770) on tqpair(0x1d33120): expected_datao=0, payload_size=4096 00:28:17.603 [2024-07-25 19:01:29.452613] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.452631] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.603 [2024-07-25 19:01:29.452641] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.495073] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.866 [2024-07-25 19:01:29.495093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.866 [2024-07-25 19:01:29.495101] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.495108] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c770) on tqpair=0x1d33120 00:28:17.866 [2024-07-25 19:01:29.495134] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:17.866 [2024-07-25 19:01:29.495150] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.495169] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.495186] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.495193] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d33120) 00:28:17.866 [2024-07-25 19:01:29.495204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.866 [2024-07-25 19:01:29.495228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c770, cid 4, qid 0 00:28:17.866 [2024-07-25 19:01:29.495400] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.866 [2024-07-25 19:01:29.495421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.866 [2024-07-25 19:01:29.495434] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.495441] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=4096, cccid=4 00:28:17.866 [2024-07-25 19:01:29.495448] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c770) on tqpair(0x1d33120): expected_datao=0, payload_size=4096 00:28:17.866 [2024-07-25 19:01:29.495461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.495481] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.495491] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.536213] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.866 [2024-07-25 19:01:29.536234] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.866 [2024-07-25 19:01:29.536242] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.536249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c770) on tqpair=0x1d33120 00:28:17.866 [2024-07-25 19:01:29.536271] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.536292] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.536309] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.536317] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1d33120) 00:28:17.866 [2024-07-25 19:01:29.536329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.866 [2024-07-25 19:01:29.536353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c770, cid 4, qid 0 00:28:17.866 [2024-07-25 19:01:29.536484] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.866 [2024-07-25 19:01:29.536502] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.866 [2024-07-25 19:01:29.536511] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.536517] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=4096, cccid=4 00:28:17.866 [2024-07-25 19:01:29.536525] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c770) on tqpair(0x1d33120): expected_datao=0, payload_size=4096 00:28:17.866 [2024-07-25 19:01:29.536533] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.536544] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.536552] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.581079] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.866 [2024-07-25 19:01:29.581099] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.866 [2024-07-25 19:01:29.581122] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.581129] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c770) on tqpair=0x1d33120 00:28:17.866 [2024-07-25 19:01:29.581144] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.581161] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.581183] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.581195] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.581203] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.581212] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:17.866 [2024-07-25 19:01:29.581219] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:17.866 [2024-07-25 19:01:29.581228] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:17.866 [2024-07-25 19:01:29.581255] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.581265] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d33120) 00:28:17.866 [2024-07-25 19:01:29.581277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.866 [2024-07-25 19:01:29.581288] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.581295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.866 [2024-07-25 19:01:29.581302] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d33120) 00:28:17.866 [2024-07-25 19:01:29.581311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.866 [2024-07-25 19:01:29.581338] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c770, cid 4, qid 0 00:28:17.866 [2024-07-25 19:01:29.581350] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c8d0, cid 5, qid 0 00:28:17.866 [2024-07-25 19:01:29.581452] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.581468] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.581475] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.581483] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c770) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.581499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.581510] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.581516] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.581523] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c8d0) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.581543] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.581554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d33120) 00:28:17.867 [2024-07-25 19:01:29.581565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.867 [2024-07-25 19:01:29.581587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c8d0, cid 5, qid 0 00:28:17.867 [2024-07-25 19:01:29.581704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.581720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.581728] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.581735] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c8d0) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.581755] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.581766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d33120) 00:28:17.867 [2024-07-25 19:01:29.581777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.867 [2024-07-25 19:01:29.581798] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c8d0, cid 5, qid 0 00:28:17.867 [2024-07-25 19:01:29.581889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.581905] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.581912] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.581922] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c8d0) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.581941] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.581950] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d33120) 00:28:17.867 [2024-07-25 19:01:29.581965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.867 [2024-07-25 19:01:29.581991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c8d0, cid 5, qid 0 00:28:17.867 [2024-07-25 19:01:29.582094] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.582111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.582118] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c8d0) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.582148] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582160] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d33120) 00:28:17.867 [2024-07-25 19:01:29.582171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.867 [2024-07-25 19:01:29.582183] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582190] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d33120) 00:28:17.867 [2024-07-25 19:01:29.582199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.867 [2024-07-25 19:01:29.582210] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582217] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d33120) 00:28:17.867 [2024-07-25 19:01:29.582226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.867 [2024-07-25 19:01:29.582238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582245] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d33120) 00:28:17.867 [2024-07-25 19:01:29.582254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.867 [2024-07-25 19:01:29.582290] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c8d0, cid 5, qid 0 00:28:17.867 [2024-07-25 19:01:29.582301] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c770, cid 4, qid 0 00:28:17.867 [2024-07-25 19:01:29.582309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1d8ca30, cid 6, qid 0 00:28:17.867 [2024-07-25 19:01:29.582316] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8cb90, cid 7, qid 0 00:28:17.867 [2024-07-25 19:01:29.582588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.867 [2024-07-25 19:01:29.582607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.867 [2024-07-25 19:01:29.582615] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582622] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=8192, cccid=5 00:28:17.867 [2024-07-25 19:01:29.582632] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c8d0) on tqpair(0x1d33120): expected_datao=0, payload_size=8192 00:28:17.867 [2024-07-25 19:01:29.582644] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582661] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582672] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.867 [2024-07-25 19:01:29.582691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.867 [2024-07-25 19:01:29.582697] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582704] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=512, cccid=4 00:28:17.867 [2024-07-25 19:01:29.582716] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8c770) on tqpair(0x1d33120): expected_datao=0, payload_size=512 00:28:17.867 [2024-07-25 19:01:29.582724] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582733] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582741] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.867 [2024-07-25 19:01:29.582759] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.867 [2024-07-25 19:01:29.582765] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582771] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=512, cccid=6 00:28:17.867 [2024-07-25 19:01:29.582779] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8ca30) on tqpair(0x1d33120): expected_datao=0, payload_size=512 00:28:17.867 [2024-07-25 19:01:29.582787] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582796] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582803] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582812] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:17.867 [2024-07-25 19:01:29.582820] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:17.867 [2024-07-25 19:01:29.582827] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582833] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d33120): datao=0, datal=4096, cccid=7 
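For reference, the four GET LOG PAGE capsules traced just above request the error information (LID 01h), SMART/health (02h), firmware slot (03h), and commands supported and effects (05h) pages; they are what populate the Error Log, Health Information, Firmware Slot Information, and Commands Supported and Effects sections of the report that follows. A minimal sketch of fetching the health page through the public API is shown below; it assumes an already connected ctrlr as in the earlier sketch and SPDK v24.05 headers, and is not part of the test run.

/*
 * Sketch only: fetch the SMART / Health Information log page (LID 02h) that
 * the report below summarizes. Assumes a connected `ctrlr`.
 */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

static void
health_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
        bool *done = arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "GET LOG PAGE failed\n");
        }
        *done = true;
}

static int
print_health(struct spdk_nvme_ctrlr *ctrlr)
{
        static struct spdk_nvme_health_information_page health;
        bool done = false;

        /* Same command as the "GET LOG PAGE ... cdw10:007f0002" capsule above. */
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                             SPDK_NVME_GLOBAL_NS_TAG, &health, sizeof(health),
                                             0, health_done, &done) != 0) {
                return -1;
        }
        while (!done) {
                /* Poll the admin queue until the completion callback fires. */
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        /* Composite temperature is reported in kelvin, as in the report below. */
        printf("Temperature: %u K\n", health.temperature);
        return 0;
}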
00:28:17.867 [2024-07-25 19:01:29.582841] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8cb90) on tqpair(0x1d33120): expected_datao=0, payload_size=4096 00:28:17.867 [2024-07-25 19:01:29.582848] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582872] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582879] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582891] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.582900] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.582906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582913] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c8d0) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.582934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.582944] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.582951] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582957] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c770) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.582971] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.582981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.582988] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.582994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8ca30) on tqpair=0x1d33120 00:28:17.867 [2024-07-25 19:01:29.583008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.867 [2024-07-25 19:01:29.583018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.867 [2024-07-25 19:01:29.583025] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.867 [2024-07-25 19:01:29.583031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8cb90) on tqpair=0x1d33120 00:28:17.867 ===================================================== 00:28:17.867 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.867 ===================================================== 00:28:17.867 Controller Capabilities/Features 00:28:17.867 ================================ 00:28:17.867 Vendor ID: 8086 00:28:17.867 Subsystem Vendor ID: 8086 00:28:17.867 Serial Number: SPDK00000000000001 00:28:17.867 Model Number: SPDK bdev Controller 00:28:17.867 Firmware Version: 24.05.1 00:28:17.868 Recommended Arb Burst: 6 00:28:17.868 IEEE OUI Identifier: e4 d2 5c 00:28:17.868 Multi-path I/O 00:28:17.868 May have multiple subsystem ports: Yes 00:28:17.868 May have multiple controllers: Yes 00:28:17.868 Associated with SR-IOV VF: No 00:28:17.868 Max Data Transfer Size: 131072 00:28:17.868 Max Number of Namespaces: 32 00:28:17.868 Max Number of I/O Queues: 127 00:28:17.868 NVMe Specification Version (VS): 1.3 00:28:17.868 NVMe Specification Version (Identify): 1.3 00:28:17.868 Maximum Queue Entries: 128 00:28:17.868 Contiguous Queues Required: Yes 00:28:17.868 Arbitration Mechanisms Supported 00:28:17.868 Weighted Round Robin: Not Supported 00:28:17.868 Vendor 
Specific: Not Supported 00:28:17.868 Reset Timeout: 15000 ms 00:28:17.868 Doorbell Stride: 4 bytes 00:28:17.868 NVM Subsystem Reset: Not Supported 00:28:17.868 Command Sets Supported 00:28:17.868 NVM Command Set: Supported 00:28:17.868 Boot Partition: Not Supported 00:28:17.868 Memory Page Size Minimum: 4096 bytes 00:28:17.868 Memory Page Size Maximum: 4096 bytes 00:28:17.868 Persistent Memory Region: Not Supported 00:28:17.868 Optional Asynchronous Events Supported 00:28:17.868 Namespace Attribute Notices: Supported 00:28:17.868 Firmware Activation Notices: Not Supported 00:28:17.868 ANA Change Notices: Not Supported 00:28:17.868 PLE Aggregate Log Change Notices: Not Supported 00:28:17.868 LBA Status Info Alert Notices: Not Supported 00:28:17.868 EGE Aggregate Log Change Notices: Not Supported 00:28:17.868 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.868 Zone Descriptor Change Notices: Not Supported 00:28:17.868 Discovery Log Change Notices: Not Supported 00:28:17.868 Controller Attributes 00:28:17.868 128-bit Host Identifier: Supported 00:28:17.868 Non-Operational Permissive Mode: Not Supported 00:28:17.868 NVM Sets: Not Supported 00:28:17.868 Read Recovery Levels: Not Supported 00:28:17.868 Endurance Groups: Not Supported 00:28:17.868 Predictable Latency Mode: Not Supported 00:28:17.868 Traffic Based Keep ALive: Not Supported 00:28:17.868 Namespace Granularity: Not Supported 00:28:17.868 SQ Associations: Not Supported 00:28:17.868 UUID List: Not Supported 00:28:17.868 Multi-Domain Subsystem: Not Supported 00:28:17.868 Fixed Capacity Management: Not Supported 00:28:17.868 Variable Capacity Management: Not Supported 00:28:17.868 Delete Endurance Group: Not Supported 00:28:17.868 Delete NVM Set: Not Supported 00:28:17.868 Extended LBA Formats Supported: Not Supported 00:28:17.868 Flexible Data Placement Supported: Not Supported 00:28:17.868 00:28:17.868 Controller Memory Buffer Support 00:28:17.868 ================================ 00:28:17.868 Supported: No 00:28:17.868 00:28:17.868 Persistent Memory Region Support 00:28:17.868 ================================ 00:28:17.868 Supported: No 00:28:17.868 00:28:17.868 Admin Command Set Attributes 00:28:17.868 ============================ 00:28:17.868 Security Send/Receive: Not Supported 00:28:17.868 Format NVM: Not Supported 00:28:17.868 Firmware Activate/Download: Not Supported 00:28:17.868 Namespace Management: Not Supported 00:28:17.868 Device Self-Test: Not Supported 00:28:17.868 Directives: Not Supported 00:28:17.868 NVMe-MI: Not Supported 00:28:17.868 Virtualization Management: Not Supported 00:28:17.868 Doorbell Buffer Config: Not Supported 00:28:17.868 Get LBA Status Capability: Not Supported 00:28:17.868 Command & Feature Lockdown Capability: Not Supported 00:28:17.868 Abort Command Limit: 4 00:28:17.868 Async Event Request Limit: 4 00:28:17.868 Number of Firmware Slots: N/A 00:28:17.868 Firmware Slot 1 Read-Only: N/A 00:28:17.868 Firmware Activation Without Reset: N/A 00:28:17.868 Multiple Update Detection Support: N/A 00:28:17.868 Firmware Update Granularity: No Information Provided 00:28:17.868 Per-Namespace SMART Log: No 00:28:17.868 Asymmetric Namespace Access Log Page: Not Supported 00:28:17.868 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:17.868 Command Effects Log Page: Supported 00:28:17.868 Get Log Page Extended Data: Supported 00:28:17.868 Telemetry Log Pages: Not Supported 00:28:17.868 Persistent Event Log Pages: Not Supported 00:28:17.868 Supported Log Pages Log Page: May Support 00:28:17.868 Commands 
Supported & Effects Log Page: Not Supported 00:28:17.868 Feature Identifiers & Effects Log Page:May Support 00:28:17.868 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.868 Data Area 4 for Telemetry Log: Not Supported 00:28:17.868 Error Log Page Entries Supported: 128 00:28:17.868 Keep Alive: Supported 00:28:17.868 Keep Alive Granularity: 10000 ms 00:28:17.868 00:28:17.868 NVM Command Set Attributes 00:28:17.868 ========================== 00:28:17.868 Submission Queue Entry Size 00:28:17.868 Max: 64 00:28:17.868 Min: 64 00:28:17.868 Completion Queue Entry Size 00:28:17.868 Max: 16 00:28:17.868 Min: 16 00:28:17.868 Number of Namespaces: 32 00:28:17.868 Compare Command: Supported 00:28:17.868 Write Uncorrectable Command: Not Supported 00:28:17.868 Dataset Management Command: Supported 00:28:17.868 Write Zeroes Command: Supported 00:28:17.868 Set Features Save Field: Not Supported 00:28:17.868 Reservations: Supported 00:28:17.868 Timestamp: Not Supported 00:28:17.868 Copy: Supported 00:28:17.868 Volatile Write Cache: Present 00:28:17.868 Atomic Write Unit (Normal): 1 00:28:17.868 Atomic Write Unit (PFail): 1 00:28:17.868 Atomic Compare & Write Unit: 1 00:28:17.868 Fused Compare & Write: Supported 00:28:17.868 Scatter-Gather List 00:28:17.868 SGL Command Set: Supported 00:28:17.868 SGL Keyed: Supported 00:28:17.868 SGL Bit Bucket Descriptor: Not Supported 00:28:17.868 SGL Metadata Pointer: Not Supported 00:28:17.868 Oversized SGL: Not Supported 00:28:17.868 SGL Metadata Address: Not Supported 00:28:17.868 SGL Offset: Supported 00:28:17.868 Transport SGL Data Block: Not Supported 00:28:17.868 Replay Protected Memory Block: Not Supported 00:28:17.868 00:28:17.868 Firmware Slot Information 00:28:17.868 ========================= 00:28:17.868 Active slot: 1 00:28:17.868 Slot 1 Firmware Revision: 24.05.1 00:28:17.868 00:28:17.868 00:28:17.868 Commands Supported and Effects 00:28:17.868 ============================== 00:28:17.868 Admin Commands 00:28:17.868 -------------- 00:28:17.868 Get Log Page (02h): Supported 00:28:17.868 Identify (06h): Supported 00:28:17.868 Abort (08h): Supported 00:28:17.868 Set Features (09h): Supported 00:28:17.868 Get Features (0Ah): Supported 00:28:17.868 Asynchronous Event Request (0Ch): Supported 00:28:17.868 Keep Alive (18h): Supported 00:28:17.868 I/O Commands 00:28:17.868 ------------ 00:28:17.868 Flush (00h): Supported LBA-Change 00:28:17.868 Write (01h): Supported LBA-Change 00:28:17.868 Read (02h): Supported 00:28:17.868 Compare (05h): Supported 00:28:17.868 Write Zeroes (08h): Supported LBA-Change 00:28:17.868 Dataset Management (09h): Supported LBA-Change 00:28:17.868 Copy (19h): Supported LBA-Change 00:28:17.868 Unknown (79h): Supported LBA-Change 00:28:17.868 Unknown (7Ah): Supported 00:28:17.868 00:28:17.868 Error Log 00:28:17.868 ========= 00:28:17.868 00:28:17.868 Arbitration 00:28:17.868 =========== 00:28:17.868 Arbitration Burst: 1 00:28:17.868 00:28:17.868 Power Management 00:28:17.868 ================ 00:28:17.868 Number of Power States: 1 00:28:17.868 Current Power State: Power State #0 00:28:17.868 Power State #0: 00:28:17.868 Max Power: 0.00 W 00:28:17.868 Non-Operational State: Operational 00:28:17.868 Entry Latency: Not Reported 00:28:17.868 Exit Latency: Not Reported 00:28:17.868 Relative Read Throughput: 0 00:28:17.869 Relative Read Latency: 0 00:28:17.869 Relative Write Throughput: 0 00:28:17.869 Relative Write Latency: 0 00:28:17.869 Idle Power: Not Reported 00:28:17.869 Active Power: Not Reported 00:28:17.869 Non-Operational 
Permissive Mode: Not Supported 00:28:17.869 00:28:17.869 Health Information 00:28:17.869 ================== 00:28:17.869 Critical Warnings: 00:28:17.869 Available Spare Space: OK 00:28:17.869 Temperature: OK 00:28:17.869 Device Reliability: OK 00:28:17.869 Read Only: No 00:28:17.869 Volatile Memory Backup: OK 00:28:17.869 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:17.869 Temperature Threshold: [2024-07-25 19:01:29.583183] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.583209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.583232] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8cb90, cid 7, qid 0 00:28:17.869 [2024-07-25 19:01:29.583382] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.583398] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.583405] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583412] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8cb90) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.583457] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:17.869 [2024-07-25 19:01:29.583481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.869 [2024-07-25 19:01:29.583496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.869 [2024-07-25 19:01:29.583505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.869 [2024-07-25 19:01:29.583515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.869 [2024-07-25 19:01:29.583528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583558] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.583569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.583590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.583728] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.583744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.583751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.583775] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583783] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.583800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.583828] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.583944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.583960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.583967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.583974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.583983] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:17.869 [2024-07-25 19:01:29.583991] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:17.869 [2024-07-25 19:01:29.584010] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584020] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584027] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.584042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.584071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.584173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.584189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.584196] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584203] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.584223] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584234] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584241] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.584252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.584273] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.584369] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.584385] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.584392] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584399] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.584419] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584430] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584437] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.584447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.584468] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.584567] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.584582] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.584592] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584600] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.584619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584628] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584636] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.584649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.584671] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.584758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.584773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.584781] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.584810] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584820] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.584839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.584866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.584952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.584967] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.584974] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.584984] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.585004] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.585013] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 
19:01:29.585020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.585032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.869 [2024-07-25 19:01:29.585055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.869 [2024-07-25 19:01:29.589083] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.869 [2024-07-25 19:01:29.589095] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.869 [2024-07-25 19:01:29.589102] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.589109] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.869 [2024-07-25 19:01:29.589130] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.589141] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:17.869 [2024-07-25 19:01:29.589148] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d33120) 00:28:17.869 [2024-07-25 19:01:29.589158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.870 [2024-07-25 19:01:29.589180] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8c610, cid 3, qid 0 00:28:17.870 [2024-07-25 19:01:29.589324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:17.870 [2024-07-25 19:01:29.589340] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:17.870 [2024-07-25 19:01:29.589351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:17.870 [2024-07-25 19:01:29.589358] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d8c610) on tqpair=0x1d33120 00:28:17.870 [2024-07-25 19:01:29.589373] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:28:17.870 0 Kelvin (-273 Celsius) 00:28:17.870 Available Spare: 0% 00:28:17.870 Available Spare Threshold: 0% 00:28:17.870 Life Percentage Used: 0% 00:28:17.870 Data Units Read: 0 00:28:17.870 Data Units Written: 0 00:28:17.870 Host Read Commands: 0 00:28:17.870 Host Write Commands: 0 00:28:17.870 Controller Busy Time: 0 minutes 00:28:17.870 Power Cycles: 0 00:28:17.870 Power On Hours: 0 hours 00:28:17.870 Unsafe Shutdowns: 0 00:28:17.870 Unrecoverable Media Errors: 0 00:28:17.870 Lifetime Error Log Entries: 0 00:28:17.870 Warning Temperature Time: 0 minutes 00:28:17.870 Critical Temperature Time: 0 minutes 00:28:17.870 00:28:17.870 Number of Queues 00:28:17.870 ================ 00:28:17.870 Number of I/O Submission Queues: 127 00:28:17.870 Number of I/O Completion Queues: 127 00:28:17.870 00:28:17.870 Active Namespaces 00:28:17.870 ================= 00:28:17.870 Namespace ID:1 00:28:17.870 Error Recovery Timeout: Unlimited 00:28:17.870 Command Set Identifier: NVM (00h) 00:28:17.870 Deallocate: Supported 00:28:17.870 Deallocated/Unwritten Error: Not Supported 00:28:17.870 Deallocated Read Value: Unknown 00:28:17.870 Deallocate in Write Zeroes: Not Supported 00:28:17.870 Deallocated Guard Field: 0xFFFF 00:28:17.870 Flush: Supported 00:28:17.870 Reservation: Supported 00:28:17.870 Namespace Sharing Capabilities: Multiple Controllers 00:28:17.870 Size (in 
LBAs): 131072 (0GiB) 00:28:17.870 Capacity (in LBAs): 131072 (0GiB) 00:28:17.870 Utilization (in LBAs): 131072 (0GiB) 00:28:17.870 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:17.870 EUI64: ABCDEF0123456789 00:28:17.870 UUID: 721ebdd9-c67d-4add-b803-6401369efaff 00:28:17.870 Thin Provisioning: Not Supported 00:28:17.870 Per-NS Atomic Units: Yes 00:28:17.870 Atomic Boundary Size (Normal): 0 00:28:17.870 Atomic Boundary Size (PFail): 0 00:28:17.870 Atomic Boundary Offset: 0 00:28:17.870 Maximum Single Source Range Length: 65535 00:28:17.870 Maximum Copy Length: 65535 00:28:17.870 Maximum Source Range Count: 1 00:28:17.870 NGUID/EUI64 Never Reused: No 00:28:17.870 Namespace Write Protected: No 00:28:17.870 Number of LBA Formats: 1 00:28:17.870 Current LBA Format: LBA Format #00 00:28:17.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:17.870 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.870 rmmod nvme_tcp 00:28:17.870 rmmod nvme_fabrics 00:28:17.870 rmmod nvme_keyring 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3629221 ']' 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3629221 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3629221 ']' 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3629221 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3629221 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3629221' 00:28:17.870 killing process with pid 3629221 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@965 -- # kill 3629221 00:28:17.870 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3629221 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.129 19:01:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.666 19:01:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:20.666 00:28:20.666 real 0m5.316s 00:28:20.666 user 0m4.689s 00:28:20.666 sys 0m1.793s 00:28:20.666 19:01:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:20.666 19:01:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.666 ************************************ 00:28:20.666 END TEST nvmf_identify 00:28:20.666 ************************************ 00:28:20.666 19:01:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:20.666 19:01:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:20.666 19:01:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:20.666 19:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:20.666 ************************************ 00:28:20.666 START TEST nvmf_perf 00:28:20.666 ************************************ 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:20.666 * Looking for test storage... 
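The run_test wrapper seen above only checks its argument count, prints the START/END TEST banners, and reports the real/user/sys timing for the finished nvmf_identify test; the benchmark logic itself lives in test/nvmf/host/perf.sh. As a rough sketch (assuming the same SPDK checkout, wired-up test NICs, hugepage setup, and root privileges as this build node), the run could be reproduced outside Jenkins with:

  # sketch only -- paths and the --transport argument are taken from this log
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/host/perf.sh --transport=tcp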
00:28:20.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.666 19:01:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.667 19:01:32 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.667 19:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:22.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:22.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:22.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:22.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:28:22.570 00:28:22.570 --- 10.0.0.2 ping statistics --- 00:28:22.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.570 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:28:22.570 00:28:22.570 --- 10.0.0.1 ping statistics --- 00:28:22.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.570 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.570 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3631298 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3631298 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3631298 ']' 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.571 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:22.571 [2024-07-25 19:01:34.376778] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:22.571 [2024-07-25 19:01:34.376860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.571 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.571 [2024-07-25 19:01:34.442760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.828 [2024-07-25 19:01:34.529539] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.828 [2024-07-25 19:01:34.529593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
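For context on the nvmf_tgt start above: the target runs inside the cvl_0_0_ns_spdk namespace with '-m 0xF -e 0xFFFF'. 0xF is binary 1111, which is why the app reports four available cores and starts reactors on cores 0-3 in the notices that follow, and 0xFFFF is the tracepoint group mask reported above, which is why the trace notices point at 'spdk_trace -s nvmf -i 0'. A quick way to decode such a core mask by hand (a sketch, not part of the test script):

  # prints 1111, i.e. reactor cores 0, 1, 2 and 3
  echo "obase=2; $((0xF))" | bc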
00:28:22.828 [2024-07-25 19:01:34.529617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.828 [2024-07-25 19:01:34.529628] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.828 [2024-07-25 19:01:34.529638] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.828 [2024-07-25 19:01:34.529689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.828 [2024-07-25 19:01:34.529714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.828 [2024-07-25 19:01:34.529774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.828 [2024-07-25 19:01:34.529780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:22.828 19:01:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:26.113 19:01:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:26.113 19:01:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:26.371 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:26.371 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:26.629 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:26.629 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:26.629 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:26.629 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:26.629 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:26.886 [2024-07-25 19:01:38.511921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.886 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:27.145 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:27.145 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:27.403 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:27.403 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:27.403 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.661 [2024-07-25 19:01:39.491496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.661 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:27.919 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:27.919 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:27.919 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:27.919 19:01:39 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:29.294 Initializing NVMe Controllers 00:28:29.294 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:29.294 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:29.294 Initialization complete. Launching workers. 00:28:29.294 ======================================================== 00:28:29.294 Latency(us) 00:28:29.294 Device Information : IOPS MiB/s Average min max 00:28:29.294 PCIE (0000:88:00.0) NSID 1 from core 0: 84069.54 328.40 379.71 22.62 6263.06 00:28:29.294 ======================================================== 00:28:29.294 Total : 84069.54 328.40 379.71 22.62 6263.06 00:28:29.294 00:28:29.294 19:01:40 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.295 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.672 Initializing NVMe Controllers 00:28:30.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:30.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:30.672 Initialization complete. Launching workers. 
00:28:30.672 ======================================================== 00:28:30.672 Latency(us) 00:28:30.672 Device Information : IOPS MiB/s Average min max 00:28:30.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 11025.64 154.17 44812.27 00:28:30.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18844.05 6982.92 47915.30 00:28:30.672 ======================================================== 00:28:30.672 Total : 149.00 0.58 13911.63 154.17 47915.30 00:28:30.672 00:28:30.672 19:01:42 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.672 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.050 Initializing NVMe Controllers 00:28:32.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:32.050 Initialization complete. Launching workers. 00:28:32.050 ======================================================== 00:28:32.050 Latency(us) 00:28:32.050 Device Information : IOPS MiB/s Average min max 00:28:32.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8601.00 33.60 3721.42 535.32 7672.51 00:28:32.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3908.00 15.27 8253.70 5832.93 15621.97 00:28:32.050 ======================================================== 00:28:32.050 Total : 12509.00 48.86 5137.37 535.32 15621.97 00:28:32.050 00:28:32.050 19:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:32.050 19:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:32.050 19:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:32.050 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.578 Initializing NVMe Controllers 00:28:34.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.578 Controller IO queue size 128, less than required. 00:28:34.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.578 Controller IO queue size 128, less than required. 00:28:34.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:34.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:34.578 Initialization complete. Launching workers. 
00:28:34.578 ======================================================== 00:28:34.578 Latency(us) 00:28:34.578 Device Information : IOPS MiB/s Average min max 00:28:34.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1681.99 420.50 76764.40 52422.88 119969.49 00:28:34.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 551.17 137.79 239723.43 69304.77 374273.31 00:28:34.578 ======================================================== 00:28:34.578 Total : 2233.16 558.29 116984.54 52422.88 374273.31 00:28:34.578 00:28:34.836 19:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:34.836 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.122 No valid NVMe controllers or AIO or URING devices found 00:28:35.122 Initializing NVMe Controllers 00:28:35.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.122 Controller IO queue size 128, less than required. 00:28:35.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.122 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:35.122 Controller IO queue size 128, less than required. 00:28:35.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.122 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:35.122 WARNING: Some requested NVMe devices were skipped 00:28:35.122 19:01:46 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:35.122 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.654 Initializing NVMe Controllers 00:28:37.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.654 Controller IO queue size 128, less than required. 00:28:37.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:37.654 Controller IO queue size 128, less than required. 00:28:37.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:37.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:37.654 Initialization complete. Launching workers. 
00:28:37.654 00:28:37.654 ==================== 00:28:37.654 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:37.655 TCP transport: 00:28:37.655 polls: 15279 00:28:37.655 idle_polls: 6199 00:28:37.655 sock_completions: 9080 00:28:37.655 nvme_completions: 5373 00:28:37.655 submitted_requests: 8128 00:28:37.655 queued_requests: 1 00:28:37.655 00:28:37.655 ==================== 00:28:37.655 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:37.655 TCP transport: 00:28:37.655 polls: 15817 00:28:37.655 idle_polls: 6361 00:28:37.655 sock_completions: 9456 00:28:37.655 nvme_completions: 5379 00:28:37.655 submitted_requests: 8058 00:28:37.655 queued_requests: 1 00:28:37.655 ======================================================== 00:28:37.655 Latency(us) 00:28:37.655 Device Information : IOPS MiB/s Average min max 00:28:37.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1342.99 335.75 97263.35 56790.51 173556.30 00:28:37.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1344.49 336.12 96942.46 41524.28 143753.87 00:28:37.655 ======================================================== 00:28:37.655 Total : 2687.48 671.87 97102.82 41524.28 173556.30 00:28:37.655 00:28:37.655 19:01:49 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:37.655 19:01:49 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:37.917 19:01:49 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:37.917 19:01:49 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:37.917 19:01:49 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:41.205 19:01:52 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b3832920-3d73-47f8-8ff3-b4506e53d85c 00:28:41.205 19:01:52 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b3832920-3d73-47f8-8ff3-b4506e53d85c 00:28:41.205 19:01:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=b3832920-3d73-47f8-8ff3-b4506e53d85c 00:28:41.205 19:01:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:41.205 19:01:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:41.205 19:01:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:41.205 19:01:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:41.463 { 00:28:41.463 "uuid": "b3832920-3d73-47f8-8ff3-b4506e53d85c", 00:28:41.463 "name": "lvs_0", 00:28:41.463 "base_bdev": "Nvme0n1", 00:28:41.463 "total_data_clusters": 238234, 00:28:41.463 "free_clusters": 238234, 00:28:41.463 "block_size": 512, 00:28:41.463 "cluster_size": 4194304 00:28:41.463 } 00:28:41.463 ]' 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="b3832920-3d73-47f8-8ff3-b4506e53d85c") .free_clusters' 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="b3832920-3d73-47f8-8ff3-b4506e53d85c") .cluster_size' 00:28:41.463 19:01:53 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:41.463 952936 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:41.463 19:01:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3832920-3d73-47f8-8ff3-b4506e53d85c lbd_0 20480 00:28:42.031 19:01:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=5b2ad8d9-9b0c-4501-84fa-b522d180d8b1 00:28:42.031 19:01:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 5b2ad8d9-9b0c-4501-84fa-b522d180d8b1 lvs_n_0 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1069f3db-d522-44a4-a5aa-9a7b87e18c08 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1069f3db-d522-44a4-a5aa-9a7b87e18c08 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=1069f3db-d522-44a4-a5aa-9a7b87e18c08 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:42.967 { 00:28:42.967 "uuid": "b3832920-3d73-47f8-8ff3-b4506e53d85c", 00:28:42.967 "name": "lvs_0", 00:28:42.967 "base_bdev": "Nvme0n1", 00:28:42.967 "total_data_clusters": 238234, 00:28:42.967 "free_clusters": 233114, 00:28:42.967 "block_size": 512, 00:28:42.967 "cluster_size": 4194304 00:28:42.967 }, 00:28:42.967 { 00:28:42.967 "uuid": "1069f3db-d522-44a4-a5aa-9a7b87e18c08", 00:28:42.967 "name": "lvs_n_0", 00:28:42.967 "base_bdev": "5b2ad8d9-9b0c-4501-84fa-b522d180d8b1", 00:28:42.967 "total_data_clusters": 5114, 00:28:42.967 "free_clusters": 5114, 00:28:42.967 "block_size": 512, 00:28:42.967 "cluster_size": 4194304 00:28:42.967 } 00:28:42.967 ]' 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="1069f3db-d522-44a4-a5aa-9a7b87e18c08") .free_clusters' 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:42.967 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="1069f3db-d522-44a4-a5aa-9a7b87e18c08") .cluster_size' 00:28:43.226 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:43.226 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:43.226 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:43.226 20456 00:28:43.226 19:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:43.226 19:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1069f3db-d522-44a4-a5aa-9a7b87e18c08 lbd_nest_0 20456 00:28:43.508 19:01:55 nvmf_tcp.nvmf_perf -- 
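Note on the free-space figures above: get_lvs_free_mb derives them from the bdev_lvol_get_lvstores JSON as free_clusters times cluster_size converted to MiB, which is where 952936 for lvs_0 (238234 clusters of 4 MiB) and 20456 for lvs_n_0 (5114 clusters of 4 MiB) come from. A small sketch of that arithmetic, assuming nothing beyond the values shown in the JSON:

  # free MB = free_clusters * cluster_size / 1 MiB
  echo $(( 238234 * 4194304 / 1048576 ))   # 952936, matches lvs_0
  echo $((   5114 * 4194304 / 1048576 ))   # 20456, matches lvs_n_0

The lvs_0 figure is then clamped to 20480 MiB at perf.sh@77-78 before lbd_0 is created.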
host/perf.sh@88 -- # lb_nested_guid=393e66c8-6032-4ae0-9512-94b26171a187 00:28:43.508 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:43.508 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:43.508 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 393e66c8-6032-4ae0-9512-94b26171a187 00:28:43.766 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.024 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:44.024 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:44.024 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:44.024 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:44.024 19:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:44.024 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.230 Initializing NVMe Controllers 00:28:56.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:56.230 Initialization complete. Launching workers. 00:28:56.230 ======================================================== 00:28:56.230 Latency(us) 00:28:56.230 Device Information : IOPS MiB/s Average min max 00:28:56.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.29 0.02 22155.14 188.95 44803.00 00:28:56.230 ======================================================== 00:28:56.230 Total : 45.29 0.02 22155.14 188.95 44803.00 00:28:56.230 00:28:56.230 19:02:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:56.230 19:02:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.230 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.248 Initializing NVMe Controllers 00:29:06.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:06.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:06.248 Initialization complete. Launching workers. 
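For reference, the three rpc.py calls at perf.sh@89-93 above are what export the nested lvol bdev over NVMe/TCP before the sweep starts. Condensed, with the long workspace path shortened to rpc.py purely for readability:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 393e66c8-6032-4ae0-9512-94b26171a187
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420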
00:29:06.248 ======================================================== 00:29:06.248 Latency(us) 00:29:06.248 Device Information : IOPS MiB/s Average min max 00:29:06.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.20 9.40 13315.16 4985.13 48845.62 00:29:06.248 ======================================================== 00:29:06.248 Total : 75.20 9.40 13315.16 4985.13 48845.62 00:29:06.248 00:29:06.248 19:02:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:06.248 19:02:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:06.248 19:02:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.248 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.277 Initializing NVMe Controllers 00:29:16.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.277 Initialization complete. Launching workers. 00:29:16.277 ======================================================== 00:29:16.277 Latency(us) 00:29:16.277 Device Information : IOPS MiB/s Average min max 00:29:16.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7668.60 3.74 4174.18 327.78 11679.74 00:29:16.277 ======================================================== 00:29:16.277 Total : 7668.60 3.74 4174.18 327.78 11679.74 00:29:16.277 00:29:16.277 19:02:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:16.277 19:02:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:16.277 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.267 Initializing NVMe Controllers 00:29:26.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:26.267 Initialization complete. Launching workers. 00:29:26.267 ======================================================== 00:29:26.267 Latency(us) 00:29:26.267 Device Information : IOPS MiB/s Average min max 00:29:26.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3885.00 485.62 8238.12 667.47 20415.94 00:29:26.267 ======================================================== 00:29:26.267 Total : 3885.00 485.62 8238.12 667.47 20415.94 00:29:26.267 00:29:26.267 19:02:37 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:26.267 19:02:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:26.267 19:02:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.267 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.247 Initializing NVMe Controllers 00:29:36.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.247 Controller IO queue size 128, less than required. 00:29:36.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:36.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.247 Initialization complete. Launching workers. 00:29:36.247 ======================================================== 00:29:36.247 Latency(us) 00:29:36.247 Device Information : IOPS MiB/s Average min max 00:29:36.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11907.27 5.81 10752.29 1860.09 54051.21 00:29:36.247 ======================================================== 00:29:36.247 Total : 11907.27 5.81 10752.29 1860.09 54051.21 00:29:36.247 00:29:36.247 19:02:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:36.247 19:02:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.247 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.225 Initializing NVMe Controllers 00:29:46.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.225 Controller IO queue size 128, less than required. 00:29:46.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.225 Initialization complete. Launching workers. 00:29:46.225 ======================================================== 00:29:46.225 Latency(us) 00:29:46.225 Device Information : IOPS MiB/s Average min max 00:29:46.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1185.35 148.17 108311.07 15533.37 240954.34 00:29:46.225 ======================================================== 00:29:46.225 Total : 1185.35 148.17 108311.07 15533.37 240954.34 00:29:46.225 00:29:46.225 19:02:57 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.482 19:02:58 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 393e66c8-6032-4ae0-9512-94b26171a187 00:29:47.049 19:02:58 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:47.306 19:02:59 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5b2ad8d9-9b0c-4501-84fa-b522d180d8b1 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:47.873 rmmod nvme_tcp 00:29:47.873 rmmod nvme_fabrics 00:29:47.873 rmmod nvme_keyring 00:29:47.873 19:02:59 
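The six spdk_nvme_perf runs above cover the full cross of the qd_depth=(1 32 128) and io_size=(512 131072) arrays declared at perf.sh@95-96; once the last run finishes, the script deletes the subsystem and unwinds the lvol stack in reverse order (nested lvol, lvs_n_0, lbd_0, lvs_0) before nvmftestfini unloads the nvme-tcp modules. A simplified sketch of the sweep, reusing the flags that appear in the logged commands:

  for qd in 1 32 128; do
    for io in 512 131072; do
      spdk_nvme_perf -q "$qd" -o "$io" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done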
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3631298 ']' 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3631298 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3631298 ']' 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3631298 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:47.873 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3631298 00:29:48.130 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:48.130 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:48.130 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3631298' 00:29:48.130 killing process with pid 3631298 00:29:48.130 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3631298 00:29:48.130 19:02:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3631298 00:29:49.503 19:03:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.504 19:03:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.504 19:03:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.504 19:03:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.504 19:03:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.504 19:03:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.504 19:03:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.504 19:03:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.039 19:03:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:52.039 00:29:52.039 real 1m31.353s 00:29:52.039 user 5m37.456s 00:29:52.039 sys 0m15.975s 00:29:52.039 19:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:52.039 19:03:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:52.039 ************************************ 00:29:52.039 END TEST nvmf_perf 00:29:52.039 ************************************ 00:29:52.039 19:03:03 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:52.039 19:03:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:52.039 19:03:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:52.039 19:03:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.039 ************************************ 00:29:52.039 START TEST nvmf_fio_host 00:29:52.039 ************************************ 00:29:52.039 19:03:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:52.039 * Looking for test storage... 
00:29:52.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.039 19:03:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.039 19:03:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.039 19:03:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.039 19:03:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.039 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.039 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:52.040 19:03:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:53.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:53.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.945 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:53.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:53.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
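Device discovery above picked up the two Intel E810 functions (0x8086:0x159b) at 0000:0a:00.0 and 0000:0a:00.1 and resolved them to the net devices cvl_0_0 and cvl_0_1 through the sysfs glob visible in the trace; cvl_0_0 then serves as the target interface and cvl_0_1 as the initiator. A one-line illustration of that lookup (the PCI address is the one from this host's log):

  pci=0000:0a:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # lists the netdev(s) behind this function, cvl_0_0 on this machine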
00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:29:53.946 00:29:53.946 --- 10.0.0.2 ping statistics --- 00:29:53.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.946 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:29:53.946 00:29:53.946 --- 10.0.0.1 ping statistics --- 00:29:53.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.946 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3643391 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3643391 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3643391 ']' 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:53.946 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.946 [2024-07-25 19:03:05.630314] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:29:53.946 [2024-07-25 19:03:05.630400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.946 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.946 [2024-07-25 19:03:05.697425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.946 [2024-07-25 19:03:05.789660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:53.946 [2024-07-25 19:03:05.789720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.946 [2024-07-25 19:03:05.789745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.946 [2024-07-25 19:03:05.789757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.946 [2024-07-25 19:03:05.789767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.946 [2024-07-25 19:03:05.789814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.946 [2024-07-25 19:03:05.789875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.946 [2024-07-25 19:03:05.789918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.946 [2024-07-25 19:03:05.789921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.205 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:54.205 19:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:54.205 19:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:54.463 [2024-07-25 19:03:06.144428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.463 19:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:54.463 19:03:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.463 19:03:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.463 19:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:54.720 Malloc1 00:29:54.720 19:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.978 19:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:55.263 19:03:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.521 [2024-07-25 19:03:07.191784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.521 19:03:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:55.779 19:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:56.037 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:56.037 fio-3.35 00:29:56.037 Starting 1 thread 00:29:56.037 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.565 00:29:58.565 test: (groupid=0, jobs=1): err= 0: pid=3644003: Thu Jul 25 19:03:10 2024 00:29:58.565 read: IOPS=8592, BW=33.6MiB/s (35.2MB/s)(67.3MiB/2006msec) 00:29:58.565 slat (nsec): min=1951, max=154882, avg=2758.18, stdev=2001.54 00:29:58.565 clat (usec): min=2653, max=15063, avg=8152.83, stdev=705.47 00:29:58.565 lat (usec): min=2684, max=15066, avg=8155.59, stdev=705.39 00:29:58.565 clat percentiles (usec): 00:29:58.565 | 1.00th=[ 6521], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7570], 00:29:58.565 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8291], 00:29:58.565 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9241], 00:29:58.565 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[13435], 99.95th=[14353], 00:29:58.565 | 99.99th=[15008] 00:29:58.565 bw ( KiB/s): 
min=33896, max=35336, per=99.86%, avg=34322.00, stdev=679.05, samples=4 00:29:58.565 iops : min= 8474, max= 8834, avg=8580.50, stdev=169.76, samples=4 00:29:58.565 write: IOPS=8589, BW=33.6MiB/s (35.2MB/s)(67.3MiB/2006msec); 0 zone resets 00:29:58.565 slat (usec): min=2, max=130, avg= 2.91, stdev= 1.77 00:29:58.565 clat (usec): min=1389, max=11825, avg=6694.06, stdev=562.49 00:29:58.565 lat (usec): min=1397, max=11828, avg=6696.97, stdev=562.45 00:29:58.565 clat percentiles (usec): 00:29:58.565 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:29:58.565 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:29:58.565 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7570], 00:29:58.565 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[ 9765], 99.95th=[10683], 00:29:58.565 | 99.99th=[11863] 00:29:58.565 bw ( KiB/s): min=33856, max=34968, per=100.00%, avg=34358.00, stdev=470.00, samples=4 00:29:58.565 iops : min= 8464, max= 8742, avg=8589.50, stdev=117.50, samples=4 00:29:58.565 lat (msec) : 2=0.02%, 4=0.09%, 10=99.68%, 20=0.21% 00:29:58.565 cpu : usr=61.20%, sys=35.71%, ctx=74, majf=0, minf=31 00:29:58.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:58.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:58.565 issued rwts: total=17236,17231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.565 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:58.565 00:29:58.565 Run status group 0 (all jobs): 00:29:58.565 READ: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.3MiB (70.6MB), run=2006-2006msec 00:29:58.565 WRITE: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.3MiB (70.6MB), run=2006-2006msec 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:58.565 19:03:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:58.565 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:58.565 fio-3.35 00:29:58.565 Starting 1 thread 00:29:58.565 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.093 00:30:01.093 test: (groupid=0, jobs=1): err= 0: pid=3644705: Thu Jul 25 19:03:12 2024 00:30:01.093 read: IOPS=8446, BW=132MiB/s (138MB/s)(265MiB/2005msec) 00:30:01.093 slat (nsec): min=2851, max=94168, avg=3624.94, stdev=1639.09 00:30:01.093 clat (usec): min=1306, max=17062, avg=8796.04, stdev=2150.83 00:30:01.093 lat (usec): min=1311, max=17066, avg=8799.67, stdev=2150.87 00:30:01.093 clat percentiles (usec): 00:30:01.093 | 1.00th=[ 4752], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6915], 00:30:01.093 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9241], 00:30:01.093 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11469], 95.00th=[12649], 00:30:01.093 | 99.00th=[14615], 99.50th=[15139], 99.90th=[16057], 99.95th=[16909], 00:30:01.093 | 99.99th=[17171] 00:30:01.093 bw ( KiB/s): min=62432, max=75488, per=50.58%, avg=68360.00, stdev=6421.80, samples=4 00:30:01.093 iops : min= 3902, max= 4718, avg=4272.50, stdev=401.36, samples=4 00:30:01.093 write: IOPS=4874, BW=76.2MiB/s (79.9MB/s)(140MiB/1839msec); 0 zone resets 00:30:01.093 slat (usec): min=30, max=198, avg=33.81, stdev= 5.56 00:30:01.094 clat (usec): min=4138, max=19991, avg=11094.32, stdev=1863.80 00:30:01.094 lat (usec): min=4170, max=20024, avg=11128.12, stdev=1864.12 00:30:01.094 clat percentiles (usec): 00:30:01.094 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9503], 00:30:01.094 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11338], 00:30:01.094 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13698], 95.00th=[14484], 00:30:01.094 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17695], 99.95th=[19530], 00:30:01.094 | 99.99th=[20055] 00:30:01.094 bw ( KiB/s): min=64800, max=78592, per=91.38%, avg=71264.00, stdev=6961.26, samples=4 00:30:01.094 iops : min= 4050, max= 4912, avg=4454.00, stdev=435.08, samples=4 00:30:01.094 lat (msec) : 2=0.03%, 4=0.14%, 10=56.95%, 20=42.88% 00:30:01.094 cpu : usr=77.54%, sys=20.71%, ctx=43, majf=0, 
minf=50 00:30:01.094 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:01.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:01.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:01.094 issued rwts: total=16936,8964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:01.094 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:01.094 00:30:01.094 Run status group 0 (all jobs): 00:30:01.094 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=265MiB (277MB), run=2005-2005msec 00:30:01.094 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=140MiB (147MB), run=1839-1839msec 00:30:01.094 19:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:01.351 19:03:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:01.351 19:03:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:01.351 19:03:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:01.351 19:03:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:04.627 Nvme0n1 00:30:04.627 19:03:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:07.153 19:03:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=54f4dce2-1e68-41dd-8562-24a20b5b4ae6 00:30:07.153 19:03:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 54f4dce2-1e68-41dd-8562-24a20b5b4ae6 00:30:07.153 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=54f4dce2-1e68-41dd-8562-24a20b5b4ae6 00:30:07.153 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:07.153 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:07.153 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:07.153 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:07.410 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:07.410 { 00:30:07.410 "uuid": "54f4dce2-1e68-41dd-8562-24a20b5b4ae6", 00:30:07.410 "name": "lvs_0", 00:30:07.410 "base_bdev": "Nvme0n1", 00:30:07.410 "total_data_clusters": 930, 00:30:07.410 "free_clusters": 930, 00:30:07.410 
"block_size": 512, 00:30:07.410 "cluster_size": 1073741824 00:30:07.410 } 00:30:07.410 ]' 00:30:07.411 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="54f4dce2-1e68-41dd-8562-24a20b5b4ae6") .free_clusters' 00:30:07.668 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:07.668 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="54f4dce2-1e68-41dd-8562-24a20b5b4ae6") .cluster_size' 00:30:07.668 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:07.668 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:07.668 19:03:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:07.668 952320 00:30:07.668 19:03:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:07.925 b4fcfa7e-7add-412b-bcb6-162e2f6d1103 00:30:07.925 19:03:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:08.183 19:03:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:08.441 19:03:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:08.698 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.699 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.699 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:08.699 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:08.699 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:08.699 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:08.699 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:08.699 19:03:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:08.956 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:08.956 fio-3.35 00:30:08.956 Starting 1 thread 00:30:08.956 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.481 00:30:11.481 test: (groupid=0, jobs=1): err= 0: pid=3645984: Thu Jul 25 19:03:23 2024 00:30:11.481 read: IOPS=5745, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2009msec) 00:30:11.481 slat (nsec): min=1878, max=146622, avg=2699.55, stdev=2358.70 00:30:11.481 clat (usec): min=1137, max=171463, avg=12199.52, stdev=11856.09 00:30:11.481 lat (usec): min=1140, max=171505, avg=12202.22, stdev=11856.35 00:30:11.481 clat percentiles (msec): 00:30:11.481 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:30:11.481 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:30:11.481 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 13], 00:30:11.481 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:11.481 | 99.99th=[ 171] 00:30:11.481 bw ( KiB/s): min=15896, max=25840, per=99.88%, avg=22956.00, stdev=4721.60, samples=4 00:30:11.481 iops : min= 3974, max= 6460, avg=5739.00, stdev=1180.40, samples=4 00:30:11.481 write: IOPS=5735, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2009msec); 0 zone resets 00:30:11.481 slat (nsec): min=1994, max=104501, avg=2769.89, stdev=1815.42 00:30:11.481 clat (usec): min=323, max=169388, avg=9933.75, stdev=11127.02 00:30:11.481 lat (usec): min=326, max=169393, avg=9936.52, stdev=11127.28 00:30:11.481 clat percentiles (msec): 00:30:11.481 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:30:11.481 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:30:11.481 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:30:11.481 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:30:11.481 | 99.99th=[ 169] 00:30:11.481 bw ( KiB/s): min=16872, max=25272, per=99.91%, avg=22920.00, stdev=4040.35, samples=4 00:30:11.481 iops : min= 4218, max= 6318, avg=5730.00, stdev=1010.09, samples=4 00:30:11.481 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:11.481 lat (msec) : 2=0.03%, 4=0.11%, 10=47.06%, 20=52.22%, 250=0.55% 00:30:11.481 cpu : usr=58.37%, sys=39.34%, ctx=88, majf=0, minf=31 00:30:11.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:11.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:11.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.481 issued rwts: total=11543,11522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.481 00:30:11.481 Run status group 0 (all jobs): 00:30:11.481 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2009-2009msec 00:30:11.481 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2009-2009msec 00:30:11.481 19:03:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:11.738 19:03:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=eb1747a5-2963-4445-a0e8-281c2651e15b 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb eb1747a5-2963-4445-a0e8-281c2651e15b 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=eb1747a5-2963-4445-a0e8-281c2651e15b 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:13.109 { 00:30:13.109 "uuid": "54f4dce2-1e68-41dd-8562-24a20b5b4ae6", 00:30:13.109 "name": "lvs_0", 00:30:13.109 "base_bdev": "Nvme0n1", 00:30:13.109 "total_data_clusters": 930, 00:30:13.109 "free_clusters": 0, 00:30:13.109 "block_size": 512, 00:30:13.109 "cluster_size": 1073741824 00:30:13.109 }, 00:30:13.109 { 00:30:13.109 "uuid": "eb1747a5-2963-4445-a0e8-281c2651e15b", 00:30:13.109 "name": "lvs_n_0", 00:30:13.109 "base_bdev": "b4fcfa7e-7add-412b-bcb6-162e2f6d1103", 00:30:13.109 "total_data_clusters": 237847, 00:30:13.109 "free_clusters": 237847, 00:30:13.109 "block_size": 512, 00:30:13.109 "cluster_size": 4194304 00:30:13.109 } 00:30:13.109 ]' 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="eb1747a5-2963-4445-a0e8-281c2651e15b") .free_clusters' 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="eb1747a5-2963-4445-a0e8-281c2651e15b") .cluster_size' 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:13.109 951388 00:30:13.109 19:03:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:13.674 e42d7d22-33dd-47ce-9d4c-7919e8472503 00:30:13.932 19:03:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:14.190 19:03:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:14.190 19:03:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:14.447 19:03:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.447 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.447 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:14.447 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.447 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:14.447 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.448 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:14.448 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:14.448 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.448 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.448 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:14.448 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:14.705 19:03:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.705 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:14.705 fio-3.35 00:30:14.705 Starting 1 thread 00:30:14.705 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.233 00:30:17.233 test: (groupid=0, jobs=1): err= 0: pid=3646717: Thu Jul 25 19:03:28 2024 00:30:17.233 read: IOPS=5889, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2008msec) 00:30:17.233 slat (nsec): min=1954, max=172027, avg=2548.39, stdev=2136.51 00:30:17.233 clat (usec): min=4445, max=20966, avg=11915.90, stdev=1092.73 00:30:17.233 lat (usec): min=4450, max=20969, avg=11918.44, stdev=1092.62 00:30:17.233 clat percentiles (usec): 00:30:17.233 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:30:17.233 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:30:17.233 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:30:17.233 | 99.00th=[14353], 99.50th=[14484], 99.90th=[18744], 99.95th=[19006], 00:30:17.233 | 99.99th=[20841] 00:30:17.233 bw ( KiB/s): min=22520, max=23912, per=99.81%, avg=23514.00, stdev=668.57, samples=4 00:30:17.233 iops : min= 5630, max= 5978, avg=5878.50, stdev=167.14, samples=4 00:30:17.233 write: IOPS=5882, BW=23.0MiB/s (24.1MB/s)(46.1MiB/2008msec); 0 zone resets 00:30:17.233 slat (usec): min=2, max=106, avg= 2.67, stdev= 1.56 00:30:17.233 clat (usec): min=2164, max=17541, avg=9714.22, stdev=902.20 00:30:17.233 lat (usec): min=2170, max=17543, avg=9716.89, stdev=902.17 00:30:17.233 clat percentiles (usec): 00:30:17.233 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 8979], 00:30:17.233 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:30:17.233 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[11076], 00:30:17.233 | 99.00th=[11600], 99.50th=[11994], 99.90th=[16319], 99.95th=[16581], 00:30:17.233 | 99.99th=[17433] 00:30:17.233 bw ( KiB/s): min=23320, max=23680, per=99.91%, avg=23510.00, stdev=166.97, samples=4 00:30:17.233 iops : min= 5830, max= 5920, avg=5877.50, stdev=41.74, samples=4 00:30:17.233 lat (msec) : 4=0.05%, 10=33.00%, 20=66.94%, 50=0.02% 00:30:17.233 cpu : usr=62.93%, sys=34.93%, ctx=98, majf=0, minf=31 00:30:17.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:17.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.233 issued rwts: total=11826,11813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.233 00:30:17.233 Run status group 0 (all jobs): 00:30:17.233 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.4MB), run=2008-2008msec 00:30:17.233 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.4MB), run=2008-2008msec 00:30:17.233 19:03:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:17.491 19:03:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:17.491 19:03:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:21.709 19:03:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore 
-l lvs_n_0 00:30:21.709 19:03:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:24.238 19:03:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:24.495 19:03:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:26.394 19:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:26.394 19:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:26.394 19:03:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:26.394 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:26.394 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:26.394 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:26.395 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:26.395 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:26.395 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:26.395 rmmod nvme_tcp 00:30:26.652 rmmod nvme_fabrics 00:30:26.652 rmmod nvme_keyring 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3643391 ']' 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3643391 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3643391 ']' 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3643391 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3643391 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3643391' 00:30:26.652 killing process with pid 3643391 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3643391 00:30:26.652 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3643391 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.910 19:03:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.815 19:03:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:28.815 00:30:28.815 real 0m37.152s 00:30:28.815 user 2m22.915s 00:30:28.815 sys 0m6.876s 00:30:28.815 19:03:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:28.815 19:03:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.815 ************************************ 00:30:28.815 END TEST nvmf_fio_host 00:30:28.815 ************************************ 00:30:28.815 19:03:40 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:28.815 19:03:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:28.815 19:03:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.815 19:03:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:28.815 ************************************ 00:30:28.815 START TEST nvmf_failover 00:30:28.815 ************************************ 00:30:28.815 19:03:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:29.073 * Looking for test storage... 00:30:29.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.073 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:29.074 19:03:40 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:29.074 19:03:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.977 19:03:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:30.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:30.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.977 19:03:42 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:30.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:30.977 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.977 
19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.977 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:31.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:30:31.236 00:30:31.236 --- 10.0.0.2 ping statistics --- 00:30:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.236 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:30:31.236 00:30:31.236 --- 10.0.0.1 ping statistics --- 00:30:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.236 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3649980 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3649980 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3649980 ']' 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:31.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:31.236 19:03:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:31.236 [2024-07-25 19:03:42.941509] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:31.236 [2024-07-25 19:03:42.941582] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.236 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.236 [2024-07-25 19:03:43.014948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:31.236 [2024-07-25 19:03:43.110092] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.236 [2024-07-25 19:03:43.110150] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.236 [2024-07-25 19:03:43.110167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.236 [2024-07-25 19:03:43.110181] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.236 [2024-07-25 19:03:43.110193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.236 [2024-07-25 19:03:43.110294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.236 [2024-07-25 19:03:43.110357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.236 [2024-07-25 19:03:43.110360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.494 19:03:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:31.494 19:03:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:31.494 19:03:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:31.494 19:03:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.494 19:03:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:31.494 19:03:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.494 19:03:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:31.752 [2024-07-25 19:03:43.460972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.752 19:03:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:32.009 Malloc0 00:30:32.009 19:03:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.268 19:03:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.526 19:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.783 [2024-07-25 19:03:44.476794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.783 19:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:33.041 [2024-07-25 19:03:44.721522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:33.041 19:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:33.299 [2024-07-25 19:03:44.966434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:33.299 19:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3650253 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3650253 /var/tmp/bdevperf.sock 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3650253 ']' 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:33.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
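For readability, the failover flow that the trace around this point is driving can be summarized as a short shell sketch. It uses only RPC calls, the subsystem NQN, the 10.0.0.2 address, and the 4420/4421/4422 listener ports that appear in this log; "$rootdir" is a stand-in for the SPDK checkout path, the ordering is condensed, and this is an illustrative reconstruction rather than the verbatim host/failover.sh script.

  # Sketch only: target gets a TCP transport, a 64 MiB malloc namespace, and three listeners.
  rpc="$rootdir/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # Initiator side: bdevperf attaches over the first path, then a second path is added,
  # so removing the active listener later in the test forces a path failover.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Fail the active path; I/O is expected to continue on the remaining listener(s).
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The remove/add listener calls that follow in the trace (dropping 4420, then 4421, re-adding 4420, dropping 4422) repeat this same pattern to exercise repeated failover and failback.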
00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:33.300 19:03:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.558 19:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:33.558 19:03:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:33.559 19:03:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.816 NVMe0n1 00:30:33.816 19:03:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:34.384 00:30:34.384 19:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3650387 00:30:34.384 19:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:34.384 19:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:35.320 19:03:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.578 [2024-07-25 19:03:47.303372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303586] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 [2024-07-25 19:03:47.303678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af090 is same with the state(5) to be set 00:30:35.578 19:03:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:38.870 19:03:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.128 00:30:39.128 19:03:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:39.410 [2024-07-25 19:03:51.088359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0610 is same with the state(5) to be set 00:30:39.410 [2024-07-25 19:03:51.088562] 
00:30:39.411 19:03:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:30:42.694 19:03:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:42.694 [2024-07-25 19:03:54.389321] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:42.694 19:03:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:43.630 19:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:43.889 [2024-07-25 19:03:55.654647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0980 is same with the state(5) to be set
00:30:43.889 [... same tcp.c:1598 recv-state *ERROR* for tqpair=0x13b0980 repeated verbatim, 19:03:55.654728 through 19:03:55.655517 ...]
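For readers tracing the failover flow, the host/failover.sh steps logged above reduce to the RPC sequence sketched below. This is a minimal sketch, not the script itself: it assumes the nvmf target and the bdevperf session with its RPC socket at /var/tmp/bdevperf.sock are already running, and rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log.

    # add a second path to the subsystem on port 4422, then move the listeners around
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3     # give outstanding I/O time to fail over to the surviving path
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Each remove_listener call above is immediately followed by one of the tcp.c:1598 recv-state error bursts.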
00:30:43.890 19:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # sleep 3
00:30:43.890 19:03:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3650387
00:30:50.470 0
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3650253
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3650253 ']'
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3650253
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3650253
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3650253'
00:30:50.470 killing process with pid 3650253
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3650253
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3650253
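The autotest_common.sh xtrace above is the harness's killprocess helper shutting down the bdevperf process (pid 3650253, matching the spdk_pid3650253 file prefix in the dump below). A rough reconstruction from this trace alone, not copied from autotest_common.sh and with the sudo branch left as a stub, would look like:

    # reconstructed from the xtrace above; the real helper has more branches
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1              # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :                                   # the traced run took the non-sudo path
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }

In the run above it is invoked as killprocess 3650253 right before host/failover.sh cats try.txt.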
00:30:50.470 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:50.470 [2024-07-25 19:03:45.028684] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:30:50.470 [2024-07-25 19:03:45.028776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650253 ]
00:30:50.470 EAL: No free 2048 kB hugepages reported on node 1
00:30:50.470 [2024-07-25 19:03:45.088809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:50.470 [2024-07-25 19:03:45.177494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:50.470 Running I/O for 15 seconds...
00:30:50.470 [2024-07-25 19:03:47.305544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.470 [2024-07-25 19:03:47.305588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.470 [... analogous nvme_qpair.c: 243 print_command / nvme_qpair.c: 474 ABORTED - SQ DELETION print_completion pairs repeated (varying cid) for READ lba:80168-80336 and WRITE lba:80360-80736, len:8 each ...]
00:30:50.471 [2024-07-25 19:03:47.307875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:50.472 [2024-07-25 19:03:47.307892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80744 len:8 PRP1 0x0 PRP2 0x0
00:30:50.472 [2024-07-25 19:03:47.307906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.472 [2024-07-25 19:03:47.307924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:50.472 [... the same aborting-queued-i/o / Command completed manually / WRITE / ABORTED - SQ DELETION sequence repeated for cid:0 lba:80752 through lba:81016, len:8, PRP1 0x0 PRP2 0x0 ...]
00:30:50.473 [2024-07-25 19:03:47.309757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:50.473 [2024-07-25 19:03:47.309768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.309780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.309794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.309805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.309816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.309829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.309842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.309864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.309876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.309889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.309901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.309912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.309923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.309936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.309948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.309959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.309975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.309998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.310010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.310021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.310032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.310077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.310092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.310104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.310115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81072 len:8 PRP1 0x0 PRP2 0x0 
00:30:50.473 [2024-07-25 19:03:47.310128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.310141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.310152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.310164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.310176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.310189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.310200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.310211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81088 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.310224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.310237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.310248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.310260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81096 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.310272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.310286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.473 [2024-07-25 19:03:47.310297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.473 [2024-07-25 19:03:47.310308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81104 len:8 PRP1 0x0 PRP2 0x0 00:30:50.473 [2024-07-25 19:03:47.310321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.473 [2024-07-25 19:03:47.310334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81112 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81120 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81128 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81136 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81144 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81152 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81160 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81168 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80344 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.474 [2024-07-25 19:03:47.310885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.474 [2024-07-25 19:03:47.310896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80352 len:8 PRP1 0x0 PRP2 0x0 00:30:50.474 [2024-07-25 19:03:47.310910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.310967] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13e5ef0 was disconnected and freed. reset controller. 
00:30:50.474 [2024-07-25 19:03:47.310985] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:50.474 [2024-07-25 19:03:47.311032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.474 [2024-07-25 19:03:47.311066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.311083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.474 [2024-07-25 19:03:47.311097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.311112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.474 [2024-07-25 19:03:47.311125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.311139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.474 [2024-07-25 19:03:47.311152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:47.311166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.474 [2024-07-25 19:03:47.311212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6740 (9): Bad file descriptor 00:30:50.474 [2024-07-25 19:03:47.314572] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.474 [2024-07-25 19:03:47.432789] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:50.474 [2024-07-25 19:03:51.090405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.474 [2024-07-25 19:03:51.090445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:51.090471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.474 [2024-07-25 19:03:51.090488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:51.090504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.474 [2024-07-25 19:03:51.090519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:51.090536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.474 [2024-07-25 19:03:51.090550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:51.090565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.474 [2024-07-25 19:03:51.090585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.474 [2024-07-25 19:03:51.090600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.474 [2024-07-25 19:03:51.090614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 
19:03:51.090740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.090972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.090986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.475 [2024-07-25 19:03:51.091246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091350] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.475 [2024-07-25 19:03:51.091775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.475 [2024-07-25 19:03:51.091788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.091803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.091817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.091832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.091845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.091860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.091874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.091889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.091902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.091917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.091930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.091945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:50.476 [2024-07-25 19:03:51.091958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.091972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.091986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 
19:03:51.092280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.476 [2024-07-25 19:03:51.092976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.476 [2024-07-25 19:03:51.092990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.477 [2024-07-25 19:03:51.093282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.477 [2024-07-25 19:03:51.093499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:50.477 [2024-07-25 19:03:51.093529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.093546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110352 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.093559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.477 [2024-07-25 19:03:51.093652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.477 [2024-07-25 19:03:51.093682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.477 [2024-07-25 19:03:51.093709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.477 [2024-07-25 19:03:51.093743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.093756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6740 is same with the state(5) to be set 00:30:50.477 [2024-07-25 19:03:51.093993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110360 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110368 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:110376 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110384 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110392 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110400 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110408 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110416 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110424 len:8 PRP1 0x0 PRP2 0x0 
00:30:50.477 [2024-07-25 19:03:51.094481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.477 [2024-07-25 19:03:51.094505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.477 [2024-07-25 19:03:51.094516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110432 len:8 PRP1 0x0 PRP2 0x0 00:30:50.477 [2024-07-25 19:03:51.094529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.477 [2024-07-25 19:03:51.094542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110440 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110448 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110456 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110464 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110472 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094774] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110480 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110488 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110496 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.094953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110504 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.094966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.094979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.094989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109712 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109720 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095082] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109728 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109736 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109744 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109752 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109760 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109488 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109496 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109504 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109512 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109520 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109528 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 19:03:51.095660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.478 [2024-07-25 19:03:51.095671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.478 [2024-07-25 19:03:51.095682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109536 len:8 PRP1 0x0 PRP2 0x0 00:30:50.478 [2024-07-25 19:03:51.095695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.478 [2024-07-25 
19:03:51.095707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.095718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.095730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109544 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.095742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.095755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.095766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.095777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109552 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.095790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.095803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.095814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.095825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109560 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.095838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.095852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.095862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.095874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109568 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.095886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.095899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.095910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.095921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109576 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.095937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.095950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.095961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.095973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109584 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.095985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.095998] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109592 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109600 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109608 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109616 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109624 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109632 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:50.479 [2024-07-25 19:03:51.096326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109640 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109648 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109656 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109664 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109672 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109680 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096637] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109688 len:8 PRP1 0x0 PRP2 0x0 00:30:50.479 [2024-07-25 19:03:51.096661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.479 [2024-07-25 19:03:51.096674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.479 [2024-07-25 19:03:51.096684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.479 [2024-07-25 19:03:51.096695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109696 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.096708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.096724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.096735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.096746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109768 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.096759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.096776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.096788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.096799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109776 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.096811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.096825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.096836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.096847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109784 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.096859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.096872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.096883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.096894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109792 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.096907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.096920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.096931] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.096942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109800 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.096954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.096967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.096978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.096988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109808 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109816 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109824 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109832 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109840 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109848 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109856 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109864 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109872 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109880 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109888 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 
[2024-07-25 19:03:51.097580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109896 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109904 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109912 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109920 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109928 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109936 len:8 PRP1 0x0 PRP2 0x0 00:30:50.480 [2024-07-25 19:03:51.097840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.480 [2024-07-25 19:03:51.097853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.480 [2024-07-25 19:03:51.097864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.480 [2024-07-25 19:03:51.097875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109944 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.097888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.097901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.097915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.097926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109952 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.097939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.097952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.097963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.097975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109960 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.097988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109968 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109976 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109984 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:109992 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110000 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110008 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110016 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110024 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110032 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110040 len:8 PRP1 0x0 PRP2 
0x0 00:30:50.481 [2024-07-25 19:03:51.098531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110048 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110056 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110064 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110072 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110080 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110088 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098821] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110096 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110104 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.098953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110112 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.098966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.098979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.098990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.099001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110120 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.099014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.481 [2024-07-25 19:03:51.099027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.481 [2024-07-25 19:03:51.104204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.481 [2024-07-25 19:03:51.104234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110128 len:8 PRP1 0x0 PRP2 0x0 00:30:50.481 [2024-07-25 19:03:51.104250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110136 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110144 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110152 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110160 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110168 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110176 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110184 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110192 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110200 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110208 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110216 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110224 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110232 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 
[2024-07-25 19:03:51.104900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110240 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.104959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.104970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110248 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.104983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.104996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110256 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110264 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110272 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110280 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105225] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110288 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109704 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110296 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.482 [2024-07-25 19:03:51.105400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.482 [2024-07-25 19:03:51.105411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110304 len:8 PRP1 0x0 PRP2 0x0 00:30:50.482 [2024-07-25 19:03:51.105423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.482 [2024-07-25 19:03:51.105436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.483 [2024-07-25 19:03:51.105447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.483 [2024-07-25 19:03:51.105461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110312 len:8 PRP1 0x0 PRP2 0x0 00:30:50.483 [2024-07-25 19:03:51.105475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:51.105489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.483 [2024-07-25 19:03:51.105500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.483 [2024-07-25 19:03:51.105512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110320 len:8 PRP1 0x0 PRP2 0x0 00:30:50.483 [2024-07-25 19:03:51.105525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:51.105538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:50.483 [2024-07-25 19:03:51.105549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.483 [2024-07-25 19:03:51.105560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110328 len:8 PRP1 0x0 PRP2 0x0 00:30:50.483 [2024-07-25 19:03:51.105572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:51.105585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.483 [2024-07-25 19:03:51.105596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.483 [2024-07-25 19:03:51.105607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110336 len:8 PRP1 0x0 PRP2 0x0 00:30:50.483 [2024-07-25 19:03:51.105620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:51.105633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.483 [2024-07-25 19:03:51.105644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.483 [2024-07-25 19:03:51.105655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110344 len:8 PRP1 0x0 PRP2 0x0 00:30:50.483 [2024-07-25 19:03:51.105668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:51.105681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.483 [2024-07-25 19:03:51.105692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.483 [2024-07-25 19:03:51.105703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110352 len:8 PRP1 0x0 PRP2 0x0 00:30:50.483 [2024-07-25 19:03:51.105716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:51.105773] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13e8130 was disconnected and freed. reset controller. 00:30:50.483 [2024-07-25 19:03:51.105791] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:50.483 [2024-07-25 19:03:51.105805] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.483 [2024-07-25 19:03:51.105856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6740 (9): Bad file descriptor 00:30:50.483 [2024-07-25 19:03:51.109148] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.483 [2024-07-25 19:03:51.275470] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:50.483 [2024-07-25 19:03:55.656740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.656782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.656813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.656830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.656846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.656861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.656877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.656892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.656907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.656921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.656935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.656949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.656963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.656976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.656990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 
19:03:55.657101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.483 [2024-07-25 19:03:55.657409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.483 [2024-07-25 19:03:55.657422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-07-25 19:03:55.657451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-07-25 19:03:55.657479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-07-25 19:03:55.657508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-07-25 19:03:55.657535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-07-25 19:03:55.657563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-07-25 19:03:55.657595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.484 [2024-07-25 19:03:55.657621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:89 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78440 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.657980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.657994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 
19:03:55.658310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.484 [2024-07-25 19:03:55.658587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.484 [2024-07-25 19:03:55.658602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.658978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.658992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.485 [2024-07-25 19:03:55.659788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.485 [2024-07-25 19:03:55.659803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.485 [2024-07-25 19:03:55.659816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.659831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.659844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.659858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.659872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.659890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.659904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.659919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.659932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.659947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.659960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.659975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.659988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.660003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.660016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.660031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.660044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.660066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.486 [2024-07-25 19:03:55.660098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.660131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.486 [2024-07-25 19:03:55.660150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:30:50.486 [2024-07-25 19:03:55.660163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 19:03:55.660221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.486 [2024-07-25 19:03:55.660244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.486 [2024-07-25 
19:03:55.660259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:50.486 [2024-07-25 19:03:55.660273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.486 [2024-07-25 19:03:55.660287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:50.486 [2024-07-25 19:03:55.660300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.486 [2024-07-25 19:03:55.660314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:50.486 [2024-07-25 19:03:55.660326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.486 [2024-07-25 19:03:55.660340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6740 is same with the state(5) to be set
00:30:50.486 [2024-07-25 19:03:55.660595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:50.486 [2024-07-25 19:03:55.660615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:50.486 [2024-07-25 19:03:55.660628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0
00:30:50.486 [2024-07-25 19:03:55.660641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[19:03:55.660657 through 19:03:55.672275: the same abort sequence (nvme_qpair_abort_queued_reqs *ERROR* aborting queued i/o, nvme_qpair_manual_complete_request *NOTICE* Command completed manually, nvme_io_qpair_print_command, spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) repeats for each remaining queued I/O: WRITE lba:79000-79128, READ lba:78112-78336, WRITE lba:78344-78960, all sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0]
00:30:50.491 [2024-07-25 19:03:55.672289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:50.491 [2024-07-25 19:03:55.672300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:50.491 [2024-07-25 19:03:55.672312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0
00:30:50.491 [2024-07-25 19:03:55.672325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.491 [2024-07-25 19:03:55.672338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.491 [2024-07-25 19:03:55.672350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.491 [2024-07-25 19:03:55.672376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:30:50.491 [2024-07-25 19:03:55.672390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.491 [2024-07-25 19:03:55.672403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.492 [2024-07-25 19:03:55.672414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.492 [2024-07-25 19:03:55.672426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:30:50.492 [2024-07-25 19:03:55.672439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.492 [2024-07-25 19:03:55.672496] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13e9ff0 was disconnected and freed. reset controller. 00:30:50.492 [2024-07-25 19:03:55.672513] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:50.492 [2024-07-25 19:03:55.672528] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.492 [2024-07-25 19:03:55.672578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6740 (9): Bad file descriptor 00:30:50.492 [2024-07-25 19:03:55.675834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.492 [2024-07-25 19:03:55.714991] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:50.492 00:30:50.492 Latency(us) 00:30:50.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.492 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:50.492 Verification LBA range: start 0x0 length 0x4000 00:30:50.492 NVMe0n1 : 15.00 8635.37 33.73 829.41 0.00 13497.34 585.58 26214.40 00:30:50.492 =================================================================================================================== 00:30:50.492 Total : 8635.37 33.73 829.41 0.00 13497.34 585.58 26214.40 00:30:50.492 Received shutdown signal, test time was about 15.000000 seconds 00:30:50.492 00:30:50.492 Latency(us) 00:30:50.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.492 =================================================================================================================== 00:30:50.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3652230 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3652230 /var/tmp/bdevperf.sock 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3652230 ']' 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:50.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
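A minimal sketch (not part of the captured output) of the check-and-relaunch step traced above at failover.sh@65-@75. The bdevperf flags, socket path and the waitforlisten helper are taken verbatim from the trace; the grep target file and the $testdir/$rootdir variables are assumptions for illustration.
count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")   # assumed target file; the trace only shows the pattern
(( count == 3 )) || exit 1                                              # one successful reset expected per failover exercised so far
"$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock                    # block until the bdevperf RPC socket is up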
00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:50.492 [2024-07-25 19:04:01.974570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:50.492 19:04:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:50.492 [2024-07-25 19:04:02.227204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:50.492 19:04:02 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.060 NVMe0n1 00:30:51.060 19:04:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.318 00:30:51.318 19:04:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.884 00:30:51.884 19:04:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:51.884 19:04:03 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:51.884 19:04:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:52.141 19:04:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:55.431 19:04:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:55.431 19:04:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:55.431 19:04:07 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3652894 00:30:55.431 19:04:07 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:55.431 19:04:07 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3652894 00:30:56.808 0 00:30:56.808 19:04:08 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:56.808 [2024-07-25 19:04:01.508866] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
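The phase traced above (failover.sh@76-@92) registers two more target listeners, attaches three paths to the same subsystem from the bdevperf instance, then detaches the active path and expects bdev_nvme to fail over while verify I/O keeps running. A condensed sketch of that sequence, with the long rpc.py/bdevperf.py paths shortened and the three attach calls folded into a loop; all arguments are as printed in the trace.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do    # register all three paths with the bdevperf instance
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0     # controller registered
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the active path
sleep 3                                                                        # give bdev_nvme time to fail over to 4421/4422
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0     # controller must survive the path loss
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                            # run the verify workload across the failover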
00:30:56.808 [2024-07-25 19:04:01.508961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652230 ] 00:30:56.808 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.808 [2024-07-25 19:04:01.569426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.808 [2024-07-25 19:04:01.652627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.808 [2024-07-25 19:04:03.965941] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:56.808 [2024-07-25 19:04:03.966035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 19:04:03.966057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 19:04:03.966097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 19:04:03.966112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 19:04:03.966127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 19:04:03.966140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 19:04:03.966155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 19:04:03.966169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 19:04:03.966183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.808 [2024-07-25 19:04:03.966224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.808 [2024-07-25 19:04:03.966256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56f740 (9): Bad file descriptor 00:30:56.808 [2024-07-25 19:04:04.013413] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:56.808 Running I/O for 1 seconds... 
00:30:56.808 00:30:56.808 Latency(us) 00:30:56.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.808 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:56.808 Verification LBA range: start 0x0 length 0x4000 00:30:56.808 NVMe0n1 : 1.01 8918.75 34.84 0.00 0.00 14293.11 3070.48 13883.92 00:30:56.808 =================================================================================================================== 00:30:56.808 Total : 8918.75 34.84 0.00 0.00 14293.11 3070.48 13883.92 00:30:56.808 19:04:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:56.808 19:04:08 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:56.808 19:04:08 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.066 19:04:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:57.066 19:04:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:57.324 19:04:09 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.582 19:04:09 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3652230 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3652230 ']' 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3652230 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:00.867 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3652230 00:31:01.124 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:01.124 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:01.124 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3652230' 00:31:01.124 killing process with pid 3652230 00:31:01.124 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3652230 00:31:01.124 19:04:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3652230 00:31:01.124 19:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:01.124 19:04:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:01.690 
19:04:13 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:01.690 rmmod nvme_tcp 00:31:01.690 rmmod nvme_fabrics 00:31:01.690 rmmod nvme_keyring 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3649980 ']' 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3649980 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3649980 ']' 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3649980 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3649980 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3649980' 00:31:01.690 killing process with pid 3649980 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3649980 00:31:01.690 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3649980 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.947 19:04:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.921 19:04:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:03.921 00:31:03.921 real 0m34.993s 00:31:03.921 user 2m3.108s 00:31:03.921 sys 0m5.923s 00:31:03.921 19:04:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:03.921 19:04:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
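A rough sketch of the teardown traced above (killprocess of the RPC-driven bdevperf, subsystem deletion, nvmftestfini). The killprocess helper body is abbreviated to a plain kill/wait, and $bdevperf_pid, $nvmfpid and $testdir stand in for the concrete values printed in the log (3652230, 3649980).
kill "$bdevperf_pid" && wait "$bdevperf_pid"              # stop the RPC-driven bdevperf instance
sync
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem from the target
rm -f "$testdir/try.txt"                                  # discard the captured bdevperf log
modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics    # unload the kernel initiator modules (rmmod output above)
kill "$nvmfpid" && wait "$nvmfpid"                        # stop the nvmf_tgt reactor started for this test
ip -4 addr flush cvl_0_1                                  # clear the test address from the initiator-side port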
00:31:03.921 ************************************ 00:31:03.921 END TEST nvmf_failover 00:31:03.921 ************************************ 00:31:03.921 19:04:15 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:03.921 19:04:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:03.921 19:04:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:03.921 19:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 ************************************ 00:31:03.921 START TEST nvmf_host_discovery 00:31:03.921 ************************************ 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:03.921 * Looking for test storage... 00:31:03.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.921 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.922 19:04:15 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:03.922 19:04:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:06.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:06.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:06.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:06.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.456 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:06.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:31:06.457 00:31:06.457 --- 10.0.0.2 ping statistics --- 00:31:06.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.457 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:31:06.457 00:31:06.457 --- 10.0.0.1 ping statistics --- 00:31:06.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.457 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3655496 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3655496 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3655496 ']' 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:06.457 19:04:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 [2024-07-25 19:04:17.958366] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:06.457 [2024-07-25 19:04:17.958435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.457 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.457 [2024-07-25 19:04:18.027315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.457 [2024-07-25 19:04:18.122756] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.457 [2024-07-25 19:04:18.122819] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.457 [2024-07-25 19:04:18.122836] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.457 [2024-07-25 19:04:18.122850] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.457 [2024-07-25 19:04:18.122861] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
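The nvmf_tcp_init sequence traced above pins the target-side port into a network namespace, addresses both sides on 10.0.0.0/24, opens the NVMe/TCP port in iptables, pings in both directions, and only then launches nvmf_tgt inside that namespace. A condensed sketch using the interface names detected earlier (cvl_0_0, cvl_0_1) and a shortened nvmf_tgt path; all commands are as printed by nvmf/common.sh.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                                # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                      # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0        # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT             # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions reachable
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2             # start the target inside the namespace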
00:31:06.457 [2024-07-25 19:04:18.122893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 [2024-07-25 19:04:18.258744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 [2024-07-25 19:04:18.266934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 null0 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 null1 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3655630 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3655630 /tmp/host.sock 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3655630 ']' 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:06.457 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:06.457 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.717 [2024-07-25 19:04:18.340491] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:06.717 [2024-07-25 19:04:18.340574] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655630 ] 00:31:06.717 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.717 [2024-07-25 19:04:18.402311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.717 [2024-07-25 19:04:18.493656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.976 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:06.977 19:04:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.977 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 [2024-07-25 19:04:18.900639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 19:04:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:07.236 19:04:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:07.806 [2024-07-25 19:04:19.676235] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:07.806 [2024-07-25 19:04:19.676263] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:07.806 [2024-07-25 19:04:19.676286] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:08.066 [2024-07-25 19:04:19.762587] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:08.066 [2024-07-25 19:04:19.940565] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:08.066 [2024-07-25 19:04:19.940596] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.325 19:04:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:08.325 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.584 19:04:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.521 19:04:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.521 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.781 [2024-07-25 19:04:21.416129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:09.781 [2024-07-25 19:04:21.416655] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:09.781 [2024-07-25 19:04:21.416693] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:09.781 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.781 [2024-07-25 19:04:21.543070] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:09.782 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:09.782 19:04:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:09.782 [2024-07-25 19:04:21.602562] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:09.782 [2024-07-25 19:04:21.602588] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:09.782 [2024-07-25 19:04:21.602599] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:10.716 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.976 [2024-07-25 19:04:22.636043] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:10.976 [2024-07-25 19:04:22.636115] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.976 [2024-07-25 19:04:22.644475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.976 [2024-07-25 19:04:22.644512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.976 [2024-07-25 19:04:22.644538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.976 [2024-07-25 19:04:22.644580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.976 [2024-07-25 19:04:22.644601] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.976 [2024-07-25 19:04:22.644620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.976 [2024-07-25 19:04:22.644634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.976 [2024-07-25 19:04:22.644647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.976 [2024-07-25 19:04:22.644661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x529da0 is same with the state(5) to be set 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.976 [2024-07-25 19:04:22.654480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x529da0 (9): Bad file descriptor 00:31:10.976 [2024-07-25 19:04:22.664524] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.976 [2024-07-25 19:04:22.664779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.976 [2024-07-25 19:04:22.664808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x529da0 with addr=10.0.0.2, port=4420 00:31:10.976 [2024-07-25 19:04:22.664825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x529da0 is same with the state(5) to be set 00:31:10.976 [2024-07-25 19:04:22.664858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x529da0 (9): Bad file descriptor 00:31:10.976 [2024-07-25 19:04:22.664878] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.976 [2024-07-25 19:04:22.664892] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.976 [2024-07-25 19:04:22.664907] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.976 [2024-07-25 19:04:22.664927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
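The reset attempts above fail with connect() errno 111 because the 4420 listener was just removed by host/discovery.sh@127; the test simply keeps polling until the discovery service prunes the stale path. The polling helper driving these checks is the waitforcondition function whose xtrace (common/autotest_common.sh@910-916) appears throughout this log. The following is a minimal sketch reconstructed from that trace; the behaviour once the retries run out is assumed rather than shown.

# Minimal sketch of waitforcondition, reconstructed from the xtrace lines
# common/autotest_common.sh@910-916 in this log; the exhausted-retries branch
# is an assumption (the trace never reaches it here).
waitforcondition() {
	local cond=$1
	local max=10
	while (( max-- )); do
		if eval "$cond"; then
			return 0	# condition met
		fi
		sleep 1
	done
	return 1	# assumed failure path
}
# Example from the trace below: wait until only the second port remains on nvme0
# waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'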
00:31:10.976 [2024-07-25 19:04:22.674610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.976 [2024-07-25 19:04:22.674806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.976 [2024-07-25 19:04:22.674837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x529da0 with addr=10.0.0.2, port=4420 00:31:10.976 [2024-07-25 19:04:22.674854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x529da0 is same with the state(5) to be set 00:31:10.976 [2024-07-25 19:04:22.674879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x529da0 (9): Bad file descriptor 00:31:10.976 [2024-07-25 19:04:22.674909] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.976 [2024-07-25 19:04:22.674926] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.976 [2024-07-25 19:04:22.674941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.976 [2024-07-25 19:04:22.674961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.976 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.976 [2024-07-25 19:04:22.684701] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.976 [2024-07-25 19:04:22.684905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.976 [2024-07-25 19:04:22.684937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x529da0 with addr=10.0.0.2, port=4420 00:31:10.976 [2024-07-25 19:04:22.684955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x529da0 is same with the state(5) to be set 00:31:10.976 [2024-07-25 19:04:22.684980] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x529da0 (9): Bad file descriptor 00:31:10.976 [2024-07-25 19:04:22.685002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.976 [2024-07-25 19:04:22.685017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.976 [2024-07-25 19:04:22.685031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.976 [2024-07-25 19:04:22.685052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.976 [2024-07-25 19:04:22.694779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.976 [2024-07-25 19:04:22.694937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.976 [2024-07-25 19:04:22.694965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x529da0 with addr=10.0.0.2, port=4420 00:31:10.976 [2024-07-25 19:04:22.694981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x529da0 is same with the state(5) to be set 00:31:10.976 [2024-07-25 19:04:22.695003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x529da0 (9): Bad file descriptor 00:31:10.976 [2024-07-25 19:04:22.695022] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.976 [2024-07-25 19:04:22.695036] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.977 [2024-07-25 19:04:22.695075] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.977 [2024-07-25 19:04:22.695097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.977 [2024-07-25 19:04:22.704864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.977 [2024-07-25 19:04:22.705083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.977 [2024-07-25 19:04:22.705110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x529da0 with addr=10.0.0.2, port=4420 00:31:10.977 [2024-07-25 19:04:22.705127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x529da0 is same with the state(5) to be set 00:31:10.977 [2024-07-25 19:04:22.705149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x529da0 (9): Bad file descriptor 00:31:10.977 [2024-07-25 19:04:22.705169] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.977 [2024-07-25 19:04:22.705183] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.977 [2024-07-25 19:04:22.705195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.977 [2024-07-25 19:04:22.705214] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
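The checks that follow (host/discovery.sh@131 and @132 in the trace below) confirm that only the 4421 path remains on nvme0 and that removing a path produced no extra namespace notifications. They reduce to two small helpers; this is a hedged sketch assembled from the rpc_cmd/jq invocations visible in the trace (@63, @74-@75), and anything beyond what the trace shows, including the notify_id increment, is inferred and may differ from the real script.

# Sketch of the helpers exercised by the @131/@132 checks, based on the
# rpc_cmd/jq calls visible in this log; wrapper details are assumptions.
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
		| jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_notification_count() {
	# Count events newer than the last seen notify_id, then advance it;
	# the increment is inferred from the notify_id values printed in this log.
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count))
}
# After removing the 4420 listener the expectations become:
#   [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]              # "4421"
#   get_notification_count && (( notification_count == expected_count ))     # expected_count=0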
00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.977 [2024-07-25 19:04:22.714951] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.977 [2024-07-25 19:04:22.715138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.977 [2024-07-25 19:04:22.715165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x529da0 with addr=10.0.0.2, port=4420 00:31:10.977 [2024-07-25 19:04:22.715181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x529da0 is same with the state(5) to be set 00:31:10.977 [2024-07-25 19:04:22.715203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x529da0 (9): Bad file descriptor 00:31:10.977 [2024-07-25 19:04:22.715224] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:10.977 [2024-07-25 19:04:22.715237] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:10.977 [2024-07-25 19:04:22.715250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:10.977 [2024-07-25 19:04:22.715268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:10.977 [2024-07-25 19:04:22.722350] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:10.977 [2024-07-25 19:04:22.722379] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:10.977 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq 
'. | length' 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.236 19:04:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.175 [2024-07-25 19:04:23.997735] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:12.175 [2024-07-25 19:04:23.997772] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:12.175 [2024-07-25 19:04:23.997797] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:12.435 [2024-07-25 19:04:24.126221] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:12.694 [2024-07-25 19:04:24.434418] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:12.694 [2024-07-25 19:04:24.434480] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:12.694 request: 00:31:12.694 { 00:31:12.694 "name": "nvme", 00:31:12.694 "trtype": "tcp", 00:31:12.694 "traddr": "10.0.0.2", 00:31:12.694 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:12.694 "adrfam": "ipv4", 00:31:12.694 "trsvcid": "8009", 00:31:12.694 "wait_for_attach": true, 00:31:12.694 "method": "bdev_nvme_start_discovery", 00:31:12.694 "req_id": 1 00:31:12.694 } 00:31:12.694 Got JSON-RPC error response 00:31:12.694 response: 00:31:12.694 { 00:31:12.694 "code": -17, 00:31:12.694 "message": "File exists" 00:31:12.694 } 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.694 request: 00:31:12.694 { 00:31:12.694 "name": "nvme_second", 00:31:12.694 "trtype": "tcp", 00:31:12.694 "traddr": "10.0.0.2", 00:31:12.694 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:12.694 "adrfam": "ipv4", 00:31:12.694 "trsvcid": "8009", 00:31:12.694 "wait_for_attach": true, 00:31:12.694 "method": "bdev_nvme_start_discovery", 00:31:12.694 "req_id": 1 00:31:12.694 } 00:31:12.694 Got JSON-RPC error response 00:31:12.694 response: 00:31:12.694 { 00:31:12.694 "code": -17, 00:31:12.694 "message": "File exists" 00:31:12.694 } 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:12.694 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.954 19:04:24 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.954 19:04:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.894 [2024-07-25 19:04:25.641905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.894 [2024-07-25 19:04:25.641973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527b00 with addr=10.0.0.2, port=8010 00:31:13.894 [2024-07-25 19:04:25.642008] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:13.894 [2024-07-25 19:04:25.642024] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:13.894 [2024-07-25 19:04:25.642038] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:14.833 [2024-07-25 19:04:26.644457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.833 [2024-07-25 19:04:26.644532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527b00 with addr=10.0.0.2, port=8010 00:31:14.833 [2024-07-25 19:04:26.644563] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:14.833 [2024-07-25 19:04:26.644578] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:14.833 [2024-07-25 19:04:26.644591] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:16.208 [2024-07-25 19:04:27.646527] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:16.208 request: 00:31:16.208 { 00:31:16.208 "name": "nvme_second", 00:31:16.208 "trtype": "tcp", 00:31:16.208 "traddr": "10.0.0.2", 00:31:16.208 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:16.208 "adrfam": "ipv4", 00:31:16.208 "trsvcid": "8010", 00:31:16.208 "attach_timeout_ms": 3000, 00:31:16.208 "method": "bdev_nvme_start_discovery", 00:31:16.208 "req_id": 1 00:31:16.208 } 00:31:16.208 Got JSON-RPC error response 00:31:16.208 response: 00:31:16.208 { 00:31:16.208 "code": -110, 00:31:16.208 "message": "Connection timed out" 
00:31:16.208 } 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3655630 00:31:16.208 19:04:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:16.209 rmmod nvme_tcp 00:31:16.209 rmmod nvme_fabrics 00:31:16.209 rmmod nvme_keyring 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3655496 ']' 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3655496 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3655496 ']' 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3655496 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3655496 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3655496' 00:31:16.209 killing process with pid 3655496 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3655496 00:31:16.209 19:04:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3655496 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:16.209 19:04:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:18.743 00:31:18.743 real 0m14.372s 00:31:18.743 user 0m21.377s 00:31:18.743 sys 0m2.922s 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.743 ************************************ 00:31:18.743 END TEST nvmf_host_discovery 00:31:18.743 ************************************ 00:31:18.743 19:04:30 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:18.743 19:04:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:18.743 19:04:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:18.743 19:04:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:18.743 ************************************ 00:31:18.743 START TEST nvmf_host_multipath_status 00:31:18.743 ************************************ 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:18.743 * Looking for test storage... 
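Two failure paths were just exercised by the discovery test: re-issuing bdev_nvme_start_discovery against the 10.0.0.2:8009 endpoint that already has a discovery controller attached returns -17 "File exists" (for the original name and for a new one), and pointing a fresh discovery at port 8010, where nothing listens, with a 3000 ms attach timeout returns -110 "Connection timed out" after the connect() retries (errno 111). A minimal sketch of those two calls, assuming the host app still serves RPCs on /tmp/host.sock (rpc_cmd in the test is a thin wrapper around scripts/rpc.py; paths shortened here):

  # duplicate discovery against an endpoint that is already attached -> -17 "File exists"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # discovery against a port with no listener, bounded by a 3 s attach timeout -> -110
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000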
00:31:18.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.743 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:18.744 19:04:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:18.744 19:04:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:20.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:20.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
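Only NICs whose PCI device IDs are in the supported lists (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs) are considered; both E810 functions at 0000:0a:00.0 and 0000:0a:00.1 qualify on this host, and the "Found net devices under ..." lines that follow come from globbing each function's net/ directory in sysfs. A minimal sketch of that lookup, reusing the two PCI addresses from this run:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # every entry under .../net/ is a kernel interface backed by that PCI function
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] && echo "$pci -> $(basename "$netdev")"
      done
  done
  # on this host the loop yields cvl_0_0 and cvl_0_1, the interfaces used below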
00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:20.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.647 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:20.648 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:20.648 19:04:32 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:20.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:31:20.648 00:31:20.648 --- 10.0.0.2 ping statistics --- 00:31:20.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.648 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:31:20.648 00:31:20.648 --- 10.0.0.1 ping statistics --- 00:31:20.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.648 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3658795 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3658795 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3658795 ']' 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.648 [2024-07-25 19:04:32.242530] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
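The target-side interface (cvl_0_0) was just moved into its own network namespace so that the initiator side (cvl_0_1, 10.0.0.1) and the NVMe-oF target address (10.0.0.2) talk over the physical link rather than a local shortcut, and the nvmf_tgt started above runs entirely inside that namespace. A condensed sketch of the plumbing the harness performed, with the interface, namespace, and address names taken from the log and long build paths shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability
  # the target itself is then launched inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3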
00:31:20.648 [2024-07-25 19:04:32.242610] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.648 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.648 [2024-07-25 19:04:32.308623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:20.648 [2024-07-25 19:04:32.399292] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.648 [2024-07-25 19:04:32.399361] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.648 [2024-07-25 19:04:32.399378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.648 [2024-07-25 19:04:32.399391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.648 [2024-07-25 19:04:32.399403] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.648 [2024-07-25 19:04:32.399529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.648 [2024-07-25 19:04:32.399535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.648 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.907 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.907 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3658795 00:31:20.907 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:20.907 [2024-07-25 19:04:32.751684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.907 19:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:21.165 Malloc0 00:31:21.423 19:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:21.423 19:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:21.687 19:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.944 [2024-07-25 19:04:33.763355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.944 19:04:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:22.202 [2024-07-25 19:04:34.004002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:22.202 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3658963 00:31:22.202 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:22.202 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3658963 /var/tmp/bdevperf.sock 00:31:22.202 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3658963 ']' 00:31:22.203 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:22.203 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:22.203 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:22.203 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:22.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:22.203 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:22.203 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:22.496 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:22.496 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:22.496 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:22.754 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:23.322 Nvme0n1 00:31:23.322 19:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:23.581 Nvme0n1 00:31:23.581 19:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:23.581 19:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:25.484 19:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:25.484 19:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:25.744 19:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:26.003 19:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:27.381 19:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:27.381 19:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:27.381 19:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.381 19:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.381 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.382 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:27.382 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.382 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:27.639 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.639 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:27.639 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.639 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:27.897 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.897 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:27.898 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.898 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.156 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.156 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.156 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.156 19:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:28.414 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.414 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.414 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.414 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:28.672 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.672 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:28.672 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:28.930 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:29.189 19:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:30.126 19:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:30.126 19:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:30.126 19:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.126 19:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.384 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:30.384 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:30.384 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.384 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.642 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.642 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.642 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.642 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:30.900 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:30.900 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:30.900 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.900 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.158 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.158 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.158 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.158 19:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.416 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.416 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.416 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.416 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:31.672 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.672 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:31.672 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:31.929 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:32.189 19:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:33.126 19:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:33.126 19:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.126 19:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.126 19:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.383 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.383 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:33.383 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.383 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:33.641 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.641 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.641 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.641 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:33.899 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.899 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:33.899 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.899 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:34.157 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.157 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:34.157 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.157 19:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.416 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.416 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:34.416 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.416 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:34.674 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.674 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:34.674 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:34.932 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:35.190 19:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:36.123 19:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:36.123 19:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:36.123 19:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.123 19:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:36.382 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.382 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:36.382 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.382 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:36.640 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.640 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:36.640 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.640 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:36.898 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.898 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:36.898 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.898 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:37.156 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.156 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:37.156 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.156 19:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:37.414 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
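[editor's note] Every port_status call in the trace above and below follows the same pattern: ask bdevperf over its RPC socket for its io_paths and compare one field (current/connected/accessible) for the listener port under test. A minimal sketch of that pattern reconstructed from the trace, assuming the same /var/tmp/bdevperf.sock socket; the standalone helper form is illustrative, and the Jenkins workspace path to rpc.py is shortened to scripts/rpc.py:

    # port_status <trsvcid> <field> <expected>: read bdevperf's io_paths and
    # compare one field for the listener on the given port.
    port_status() {
        local port=$1 field=$2 expected=$3
        local value
        value=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$value" == "$expected" ]]
    }

    # Example from the trace: after non_optimized/inaccessible, 4421 must not be the current path.
    port_status 4421 current false
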
00:31:37.414 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:37.414 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.414 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:37.672 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.672 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:37.672 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:37.931 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:38.191 19:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:39.176 19:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:39.176 19:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:39.176 19:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.176 19:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:39.434 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.434 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:39.434 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.435 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:39.693 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.693 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:39.693 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.693 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:39.951 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.951 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:31:39.951 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.951 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:40.210 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.210 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:40.210 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.210 19:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:40.467 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.467 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:40.467 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.467 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:40.723 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.723 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:40.723 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:40.979 19:04:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:41.239 19:04:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:42.171 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:42.171 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:42.171 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.171 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:42.429 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.429 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:42.429 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.429 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:42.686 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.686 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:42.686 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.686 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:42.943 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.943 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:42.943 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.943 19:04:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:43.201 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.201 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:43.201 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.201 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:43.459 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.459 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:43.459 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.459 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:43.716 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.716 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:43.974 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:43.974 19:04:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:44.231 19:04:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:44.489 19:04:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:45.425 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:45.425 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:45.684 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.684 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:45.942 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.942 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:45.942 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.942 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:46.200 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.200 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:46.200 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.200 19:04:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:46.200 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.200 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:46.200 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.200 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:46.457 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.457 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:46.457 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.457 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:46.716 19:04:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.716 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:46.716 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.716 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.974 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.974 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:46.974 19:04:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:47.231 19:04:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:47.491 19:04:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.867 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:49.124 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.124 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:49.124 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.124 19:05:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:49.381 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.381 19:05:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:49.381 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.381 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:49.639 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.639 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:49.639 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.639 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:49.897 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.897 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:49.897 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.897 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.155 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.155 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:50.155 19:05:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:50.413 19:05:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:50.672 19:05:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:51.618 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:51.618 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:51.618 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.618 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.877 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.877 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:51.877 19:05:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.877 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.133 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.134 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.134 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.134 19:05:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:52.391 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.391 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:52.391 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.391 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.649 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.649 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.649 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.649 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.908 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.908 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:52.908 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.908 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.166 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.166 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:53.166 19:05:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:53.424 19:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:53.682 19:05:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:54.619 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:54.619 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:54.619 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.619 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:55.222 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.222 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:55.222 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.222 19:05:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:55.222 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.222 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:55.222 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.222 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.481 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.481 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.481 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.481 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.739 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.739 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:55.739 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.739 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.997 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.997 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:55.997 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.997 19:05:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3658963 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3658963 ']' 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3658963 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3658963 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3658963' 00:31:56.256 killing process with pid 3658963 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3658963 00:31:56.256 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3658963 00:31:56.540 Connection closed with partial response: 00:31:56.540 00:31:56.540 00:31:56.540 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3658963 00:31:56.540 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:56.540 [2024-07-25 19:04:34.067712] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:56.540 [2024-07-25 19:04:34.067790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658963 ] 00:31:56.540 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.540 [2024-07-25 19:04:34.131132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.540 [2024-07-25 19:04:34.217128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.540 Running I/O for 90 seconds... 
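[editor's note] The ASYMMETRIC ACCESS INACCESSIBLE completions dumped from try.txt below are the bdevperf-side view of the ANA transitions driven earlier in the trace. Each set_ANA_state step is simply a pair of target-side RPC calls, one per TCP listener; a minimal sketch of that pair, assuming the same subsystem NQN, address, and ports seen in this log (the wrapper function is illustrative, and the rpc.py path is shortened):

    # set_ANA_state <state_4420> <state_4421>: update the ANA state of both
    # TCP listeners on the target, mirroring the two rpc.py calls in the trace.
    set_ANA_state() {
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # Example from the trace: 4420 stays reachable, 4421 becomes inaccessible,
    # then the test sleeps a second before re-checking io_paths.
    set_ANA_state non_optimized inaccessible
    sleep 1
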
00:31:56.540 [2024-07-25 19:04:49.707613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.707723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.707765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.707806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.707846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.707885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.707924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.707963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.707979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.708979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.708995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.540 [2024-07-25 19:04:49.709139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.540 [2024-07-25 19:04:49.709522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.540 [2024-07-25 19:04:49.709538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.709974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.709995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:31:56.541 [2024-07-25 19:04:49.710325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.710972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.710994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.711477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.541 [2024-07-25 19:04:49.711514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.711536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.541 [2024-07-25 19:04:49.711552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.712972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.712994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.713010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.541 [2024-07-25 19:04:49.713032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.541 [2024-07-25 19:04:49.713048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.713097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.713152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.713192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.713231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:31:56.542 [2024-07-25 19:04:49.713534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.713852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-25 19:04:49.713889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.713926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.713963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.713984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.714513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.714530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.542 [2024-07-25 19:04:49.715290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.715971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.715992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.542 [2024-07-25 19:04:49.716007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.542 [2024-07-25 19:04:49.716029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:31:56.543 [2024-07-25 19:04:49.716478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.716965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.716981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.543 [2024-07-25 19:04:49.717617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.543 [2024-07-25 19:04:49.717638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.543 [2024-07-25 19:04:49.717657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:56.543-00:31:56.549 [2024-07-25 19:04:49.717680 through 19:04:49.740012] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on the same queue: WRITE sqid:1 nsid:1 lba:79048-79920 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:78904-79040 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 
[2024-07-25 19:04:49.740033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740822] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.740968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.740991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.741008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.741030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.741047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.741076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.741094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.549 [2024-07-25 19:04:49.741117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.549 [2024-07-25 19:04:49.741138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 
19:04:49.741255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79592 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.741970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.741986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:81 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.742310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.742326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.743178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.550 [2024-07-25 19:04:49.743223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.550 [2024-07-25 19:04:49.743262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.743300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.743339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.743376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.743423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.550 [2024-07-25 19:04:49.743461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.550 [2024-07-25 19:04:49.743482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:31:56.551 [2024-07-25 19:04:49.743634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.743976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.743998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.744015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.744076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.744117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.744155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.551 [2024-07-25 19:04:49.744740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.551 [2024-07-25 19:04:49.744776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:56.551 [2024-07-25 19:04:49.744813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.551 [2024-07-25 19:04:49.744835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.744850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.744876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.744892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.744913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.744945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.744969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.744992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.745962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.745985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.552 [2024-07-25 19:04:49.746605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.552 [2024-07-25 19:04:49.746621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:31:56.552 [2024-07-25 19:04:49.746643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.552 [2024-07-25 19:04:49.746660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:31:56.554 [2024-07-25 19:04:49.749789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.554 [2024-07-25 19:04:49.749806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[... several hundred analogous NOTICE pairs omitted: each outstanding WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) on qid:1 is printed by nvme_io_qpair_print_command and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:31:56.558 [2024-07-25 19:04:49.757131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.558 [2024-07-25 19:04:49.757148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.757186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.757224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.757263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.757883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.558 [2024-07-25 19:04:49.757921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.757960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.757982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.757999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.758025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.758043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.758072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.758090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.758113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.758129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.758151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.758167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.758189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.758207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.558 [2024-07-25 19:04:49.758229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.558 [2024-07-25 19:04:49.758245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.758267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.758283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:31:56.559 [2024-07-25 19:04:49.758305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.758321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.758343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.758359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.758977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.759973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.759989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.559 [2024-07-25 19:04:49.760146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.559 [2024-07-25 19:04:49.760435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.559 [2024-07-25 19:04:49.760472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.760954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.760991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:31:56.560 [2024-07-25 19:04:49.761384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.560 [2024-07-25 19:04:49.761772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.560 [2024-07-25 19:04:49.761788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.761810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.761826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.761847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.761863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.768637] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.768690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.768966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.768993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 
19:04:49.769106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79888 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.561 [2024-07-25 19:04:49.769670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.769713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.769765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.769809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.769852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.769895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-07-25 19:04:49.769937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.561 [2024-07-25 19:04:49.769964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.769980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 
19:04:49.770455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.562 [2024-07-25 19:04:49.770472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 
cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.770890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.770907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:04:49.771077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:04:49.771100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.463968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.463984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.464005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.562 [2024-07-25 19:05:05.464021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.562 [2024-07-25 19:05:05.464041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.464505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:56.563 [2024-07-25 19:05:05.464522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.465138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.465178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.465217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:82 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465840] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.563 [2024-07-25 19:05:05.465856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.465962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.465984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.466001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.466023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.466039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.466070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.466088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.466111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.466128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.466150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.466167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.563 [2024-07-25 19:05:05.466188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.563 [2024-07-25 19:05:05.466205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.466248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b 
p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.466287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.466326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.466374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.466412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.466451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.466490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.466512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.466529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467572] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.467610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.467965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:56.564 [2024-07-25 19:05:05.467982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.468021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.468068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.468111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.468150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.468190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-25 19:05:05.468604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.564 [2024-07-25 19:05:05.468650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.564 [2024-07-25 19:05:05.468673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.468689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.468712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.468733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.468756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.468773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.468796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.468812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.468834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.468851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.468888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.468905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.468927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.468944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.468965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.468981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.469018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.469368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 
p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.469946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.469968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.469985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.470024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.470085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.470134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.470173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.470212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.470252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.565 [2024-07-25 19:05:05.470576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.470621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.470661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.565 [2024-07-25 19:05:05.470683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-25 19:05:05.470699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.470721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.470738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.470760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.470776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.470804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.470820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.470847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.470869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.470893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.470910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.473408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.473469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.473508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.473545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.473583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.473620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.473658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.473694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.473732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.473768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.473790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.473811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.566 [2024-07-25 19:05:05.476256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.476308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.476350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.476398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.476978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.476999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.477016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.477038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.477054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.477093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.477111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.477133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.477149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.477171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.566 [2024-07-25 19:05:05.477187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.477210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.477226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.477253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.566 [2024-07-25 19:05:05.477270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.566 [2024-07-25 19:05:05.477292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:31:56.567 [2024-07-25 19:05:05.477484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.477501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.477630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.477667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.477704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.477915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.477931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.478787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.478812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.478839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.478873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.478897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.478913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.480699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.480738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.480781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.480800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.480823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.480839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.480862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.480883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.480913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.480930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.480952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.480969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.480991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.481250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.567 [2024-07-25 19:05:05.481288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.567 [2024-07-25 19:05:05.481327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.567 [2024-07-25 19:05:05.481608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.567 [2024-07-25 19:05:05.481623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.481659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.481695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.481730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.481767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.481803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.481840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.481898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.481951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.481973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.481990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.482013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.482029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.482051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.482075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.482099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.482116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.482138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.482154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.482176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.482194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.482216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.482233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.482255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.482271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.483732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.483757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.483784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.483802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.483823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.483845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.483883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.483900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.483921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.483936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.483957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.483972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:31:56.568 [2024-07-25 19:05:05.484013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.484217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.484257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.484637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.484691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.484743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.568 [2024-07-25 19:05:05.484783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.568 [2024-07-25 19:05:05.484843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.568 [2024-07-25 19:05:05.484860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.484882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.484898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.484924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.484942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.484964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.484981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.485020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.485065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.485726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.485771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.485811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.485865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.569 [2024-07-25 19:05:05.485904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.485956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.485977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.485993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.486013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.486029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.486081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.486100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.486139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.486156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.486178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.486195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.486217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.486234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.486256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.486272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.486295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.486312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.487805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.487831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.487873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.487891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.487914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.487945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.487967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.487983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.488035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.488104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.488156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.488194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.488233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.488271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.488310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.569 [2024-07-25 19:05:05.488348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.488387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.488440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.569 [2024-07-25 19:05:05.488477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.569 [2024-07-25 19:05:05.488493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.488530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:31:56.570 [2024-07-25 19:05:05.488738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.488755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.488909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.488948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.488970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.488987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.489144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.489345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.489399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.489452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.489526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.489547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.489562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.490981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.570 [2024-07-25 19:05:05.490997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.570 [2024-07-25 19:05:05.491486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.570 [2024-07-25 19:05:05.491809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.570 [2024-07-25 19:05:05.491832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.491848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.491870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 
nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.491886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.491909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.491926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.491967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.491985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.492022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.492246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.492325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.492441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.492479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.492518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.492534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.493089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:31:56.571 [2024-07-25 19:05:05.493236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.493534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.493550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.495638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.495782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.571 [2024-07-25 19:05:05.495818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.571 [2024-07-25 19:05:05.495911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.571 [2024-07-25 19:05:05.495926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.495951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.495967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.495988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.496003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.496024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.496053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.496086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.496117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.496141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.572 [2024-07-25 19:05:05.496157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.496179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.496195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.496218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.496234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.496257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.496273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.498321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.498368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.498763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.498803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.498841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.498880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.498918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.498957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.498979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.499039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.499091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.499131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:31:56.572 [2024-07-25 19:05:05.499382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.499398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.499476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.499520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.499542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.499575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.500915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.572 [2024-07-25 19:05:05.500938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.500983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.501001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.501039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.501055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.501103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.501120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.501142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.572 [2024-07-25 19:05:05.501159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.572 [2024-07-25 19:05:05.501181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.501197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.501235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.501273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.501317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.501356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.501400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.501440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.501478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.501517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.501539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.501556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.573 [2024-07-25 19:05:05.502641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.502849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.502970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.502992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.503008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.503052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.503637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.503683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.503737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.503789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.573 [2024-07-25 19:05:05.503826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.503862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.503899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.503935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.503971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.503992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.504007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.573 [2024-07-25 19:05:05.504028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.573 [2024-07-25 19:05:05.504043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.574 [2024-07-25 19:05:05.504104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.574 [2024-07-25 19:05:05.504163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.574 [2024-07-25 19:05:05.504201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.574 [2024-07-25 19:05:05.504238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.574 [2024-07-25 19:05:05.504275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.574 [2024-07-25 19:05:05.504330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.574 [2024-07-25 19:05:05.504368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.574 [2024-07-25 19:05:05.504390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.574 [2024-07-25 19:05:05.504407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:31:56.574 [2024-07-25 19:05:05.504429 .. 19:05:05.515809] nvme_qpair.c: nvme_io_qpair_print_command / spdk_nvme_print_completion: *NOTICE*: long run of paired READ/WRITE command and completion notices on qid:1, each completion carrying status ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0 [per-cid/per-lba entries condensed for readability; every outstanding I/O was completed with this ANA status while the active path was inaccessible]
00:31:56.576 Received shutdown signal, test time was about 32.558418 seconds
00:31:56.576
00:31:56.576                                                                                Latency(us)
00:31:56.576 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:56.576 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:56.576 Verification LBA range: start 0x0 length 0x4000
00:31:56.576 	Nvme0n1                                :      32.56    8060.36      31.49       0.00     0.00   15850.02    1856.85 4076242.11
00:31:56.576 ===================================================================================================================
00:31:56.576 	Total                                  :    8060.36      31.49       0.00       0.00   15850.02    1856.85 4076242.11
00:31:56.576 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:56.834
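For anyone skimming the raw teardown trace above and the module-unload retries that follow, the multipath_status.sh exit path reduces to a short sequence: delete the test subsystem over the SPDK RPC socket, then let nvmftestfini flush writes, unload the kernel initiator modules, and stop the target. Below is a condensed shell sketch of that sequence, not the harness's literal code; the retry-loop body and the $nvmfpid variable are assumptions paraphrased from the surrounding trace.

    # Remove the subsystem the multipath test created, via the SPDK RPC client.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # nvmfcleanup: flush outstanding writes, then unload the initiator-side
    # kernel modules. The harness retries up to 20 times because the modules
    # can stay busy briefly after the last disconnect (exact loop body assumed).
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done

    # Stop the nvmf_tgt process started for this test ($nvmfpid recorded at startup).
    kill "$nvmfpid" && wait "$nvmfpid"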
19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:56.834 rmmod nvme_tcp 00:31:56.834 rmmod nvme_fabrics 00:31:56.834 rmmod nvme_keyring 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3658795 ']' 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3658795 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3658795 ']' 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3658795 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3658795 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3658795' 00:31:56.834 killing process with pid 3658795 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3658795 00:31:56.834 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3658795 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:57.092 19:05:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.626 19:05:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:59.626 00:31:59.626 real 0m40.779s 00:31:59.626 user 2m3.248s 00:31:59.626 sys 0m10.572s 00:31:59.626 19:05:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:59.626 19:05:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:59.626 ************************************ 00:31:59.626 END TEST nvmf_host_multipath_status 00:31:59.626 ************************************ 00:31:59.626 19:05:10 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:59.626 19:05:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:59.626 19:05:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:59.626 19:05:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.626 ************************************ 00:31:59.626 START TEST nvmf_discovery_remove_ifc 00:31:59.627 ************************************ 00:31:59.627 19:05:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:59.627 * Looking for test storage... 00:31:59.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:59.627 19:05:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:59.627 19:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:01.528 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:01.528 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:01.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:01.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:01.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:01.529 19:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:01.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:32:01.529 00:32:01.529 --- 10.0.0.2 ping statistics --- 00:32:01.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.529 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:01.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:32:01.529 00:32:01.529 --- 10.0.0.1 ping statistics --- 00:32:01.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.529 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3665149 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3665149 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3665149 ']' 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.529 [2024-07-25 19:05:13.117117] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
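[annotation] With the namespace plumbing above in place and the target application starting, the topology for this test is fixed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed, hedged recap of the nvmf_tcp_init steps traced above (the same sequence repeats for the identify_kernel_target test at the end of this section); this is a sketch of what the trace shows, not a replacement for nvmf/common.sh:

    # Recap of the traced nvmf_tcp_init sequence (all commands appear in the trace above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator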
00:32:01.529 [2024-07-25 19:05:13.117189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.529 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.529 [2024-07-25 19:05:13.179537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.529 [2024-07-25 19:05:13.264479] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.529 [2024-07-25 19:05:13.264528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.529 [2024-07-25 19:05:13.264557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.529 [2024-07-25 19:05:13.264569] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.529 [2024-07-25 19:05:13.264579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.529 [2024-07-25 19:05:13.264622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.529 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.530 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:01.530 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.530 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.530 [2024-07-25 19:05:13.402981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.788 [2024-07-25 19:05:13.411207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:01.788 null0 00:32:01.788 [2024-07-25 19:05:13.443143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3665181 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3665181 /tmp/host.sock 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3665181 ']' 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:01.788 
19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:01.788 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:01.788 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.788 [2024-07-25 19:05:13.509763] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:32:01.788 [2024-07-25 19:05:13.509834] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665181 ] 00:32:01.788 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.788 [2024-07-25 19:05:13.572271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.788 [2024-07-25 19:05:13.663070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.047 19:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.983 [2024-07-25 19:05:14.850519] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:02.983 [2024-07-25 19:05:14.850545] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:02.983 [2024-07-25 19:05:14.850570] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:03.241 [2024-07-25 19:05:14.976979] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:03.499 [2024-07-25 19:05:15.161812] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:03.499 [2024-07-25 19:05:15.161881] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:03.500 [2024-07-25 19:05:15.161927] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:03.500 [2024-07-25 19:05:15.161952] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:03.500 [2024-07-25 19:05:15.161977] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.500 [2024-07-25 19:05:15.169214] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1708900 was disconnected and freed. delete nvme_qpair. 
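[annotation] The wait_for_bdev/get_bdev_list calls that follow are the backbone of this test: the host-side app at /tmp/host.sock is polled for its bdev list until nvme0n1 appears, later until it disappears, and finally until it returns as nvme1n1. A minimal sketch of that polling pattern as it shows up in the trace; the rpc.py path is an assumption based on this workspace layout, and the real script's rpc_cmd wrapper is what actually issues the call:

    # Sketch only: mirrors the get_bdev_list/wait_for_bdev pattern visible in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location
    sock=/tmp/host.sock                                                    # host-side RPC socket from the trace

    get_bdev_list() {
        # One sorted line of bdev names known to the host app, e.g. "nvme0n1".
        "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Re-check once per second until the list matches the expected value
        # ("nvme0n1" after attach, "" after interface removal, "nvme1n1" after re-add).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }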
00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.500 19:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:04.435 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.435 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.436 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.436 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.436 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.436 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.436 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.695 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.695 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.695 19:05:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.628 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.629 19:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:06.563 19:05:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.945 19:05:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:08.882 19:05:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.883 [2024-07-25 19:05:20.603292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:08.883 [2024-07-25 19:05:20.603370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.883 [2024-07-25 19:05:20.603392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.883 [2024-07-25 19:05:20.603424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.883 [2024-07-25 19:05:20.603440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.883 [2024-07-25 19:05:20.603457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.883 [2024-07-25 19:05:20.603472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.883 [2024-07-25 19:05:20.603488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.883 [2024-07-25 19:05:20.603503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.883 [2024-07-25 19:05:20.603519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.883 [2024-07-25 19:05:20.603534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.883 [2024-07-25 19:05:20.603550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cf990 is same with the state(5) to be set 00:32:08.883 [2024-07-25 19:05:20.613311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cf990 (9): Bad file descriptor 00:32:08.883 [2024-07-25 19:05:20.623370] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.817 [2024-07-25 19:05:21.677098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:09.817 [2024-07-25 
19:05:21.677159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cf990 with addr=10.0.0.2, port=4420 00:32:09.817 [2024-07-25 19:05:21.677184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cf990 is same with the state(5) to be set 00:32:09.817 [2024-07-25 19:05:21.677230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cf990 (9): Bad file descriptor 00:32:09.817 [2024-07-25 19:05:21.677659] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:09.817 [2024-07-25 19:05:21.677693] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:09.817 [2024-07-25 19:05:21.677719] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:09.817 [2024-07-25 19:05:21.677737] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:09.817 [2024-07-25 19:05:21.677769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:09.817 [2024-07-25 19:05:21.677789] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.817 19:05:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:11.193 [2024-07-25 19:05:22.680281] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.193 [2024-07-25 19:05:22.680316] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.193 [2024-07-25 19:05:22.680330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.193 [2024-07-25 19:05:22.680342] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:11.193 [2024-07-25 19:05:22.680361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
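[annotation] The errno 110 / "Bad file descriptor" churn above is the intended failure: the @75/@76 steps earlier pulled 10.0.0.2 off cvl_0_0 and downed the link inside the namespace, so every reconnect attempt on port 4420 times out. With the discovery options from the trace (--reconnect-delay-sec 1, --ctrlr-loss-timeout-sec 2, --fast-io-fail-timeout-sec 1), the host retries roughly once a second, declares the controller lost after about two seconds, deletes nvme0n1, and the wait_for_bdev '' loop returns. A hedged restatement of that trigger step, reusing the commands already shown in the trace:

    # Trigger for the reconnect failures above (same commands as @75/@76 in the trace).
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # The host was started with: bdev_nvme_start_discovery ...
    #   --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1
    # so after ~2s of failed reconnects nvme0n1 drops out of the bdev list.
    wait_for_bdev ''    # helper sketched earlier; returns once the bdev list is empty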
00:32:11.193 [2024-07-25 19:05:22.680407] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:11.193 [2024-07-25 19:05:22.680447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.193 [2024-07-25 19:05:22.680470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.193 [2024-07-25 19:05:22.680491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.193 [2024-07-25 19:05:22.680506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.193 [2024-07-25 19:05:22.680522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.193 [2024-07-25 19:05:22.680537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.193 [2024-07-25 19:05:22.680552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.193 [2024-07-25 19:05:22.680567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.193 [2024-07-25 19:05:22.680583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.193 [2024-07-25 19:05:22.680597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.193 [2024-07-25 19:05:22.680612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:11.193 [2024-07-25 19:05:22.680868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cede0 (9): Bad file descriptor 00:32:11.193 [2024-07-25 19:05:22.681888] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:11.193 [2024-07-25 19:05:22.681912] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:11.193 19:05:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:12.156 19:05:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.094 [2024-07-25 19:05:24.736701] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:13.094 [2024-07-25 19:05:24.736735] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:13.094 [2024-07-25 19:05:24.736755] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:13.094 [2024-07-25 19:05:24.864196] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:13.094 19:05:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.353 [2024-07-25 19:05:25.090639] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:13.353 [2024-07-25 19:05:25.090695] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:13.353 [2024-07-25 19:05:25.090733] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:13.353 [2024-07-25 19:05:25.090758] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:13.353 [2024-07-25 19:05:25.090773] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:13.353 [2024-07-25 19:05:25.095612] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16e9f60 was disconnected and freed. delete nvme_qpair. 
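[annotation] Re-adding the address and bringing cvl_0_0 back up (the @82/@83 steps above) lets the discovery poller find the subsystem again; because the old controller was torn down, it attaches as a fresh instance and the namespace reappears as nvme1n1 rather than nvme0n1, which is exactly what the "new subsystem nvme1" and qpair-freed lines show. The recovery step, restated from the trace:

    # Recovery step from the trace: restore the target interface, then wait for re-attach.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # new controller instance => new bdev name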
00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3665181 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3665181 ']' 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3665181 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3665181 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3665181' 00:32:14.288 killing process with pid 3665181 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3665181 00:32:14.288 19:05:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3665181 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:14.546 rmmod nvme_tcp 00:32:14.546 rmmod nvme_fabrics 00:32:14.546 rmmod nvme_keyring 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
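[annotation] Teardown then runs in the order the trace shows: the host-side app (pid 3665181 in this run) is killed first, nvmftestfini unloads the kernel NVMe/TCP modules, and just below the target app (pid 3665149) is killed and the namespace removed before the next test starts. A hedged outline of that sequence; the pids are run-specific and the netns deletion is the assumed effect of the remove_spdk_ns helper:

    # Outline of the teardown visible around this point (pids vary per run).
    kill 3665181                     # host-side nvmf_tgt listening on /tmp/host.sock
    modprobe -v -r nvme-tcp          # trace shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill 3665149                     # target nvmf_tgt running inside cvl_0_0_ns_spdk
    ip netns delete cvl_0_0_ns_spdk  # assumption: what remove_spdk_ns does for *_ns_spdk namespaces
    ip -4 addr flush cvl_0_1         # flush the initiator-side address (shown below in the trace)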
00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3665149 ']' 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3665149 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3665149 ']' 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3665149 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3665149 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3665149' 00:32:14.546 killing process with pid 3665149 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3665149 00:32:14.546 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3665149 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.805 19:05:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.715 19:05:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:16.715 00:32:16.715 real 0m17.573s 00:32:16.715 user 0m25.541s 00:32:16.715 sys 0m3.019s 00:32:16.715 19:05:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:16.715 19:05:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.715 ************************************ 00:32:16.715 END TEST nvmf_discovery_remove_ifc 00:32:16.715 ************************************ 00:32:16.715 19:05:28 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:16.715 19:05:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:16.715 19:05:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:16.715 19:05:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:16.715 ************************************ 00:32:16.715 START TEST nvmf_identify_kernel_target 00:32:16.715 ************************************ 00:32:16.715 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:16.974 * Looking for test storage... 00:32:16.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:16.974 19:05:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.877 19:05:30 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:18.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.877 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:18.878 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.878 
19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:18.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:18.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:18.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:32:18.878 00:32:18.878 --- 10.0.0.2 ping statistics --- 00:32:18.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.878 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:18.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:32:18.878 00:32:18.878 --- 10.0.0.1 ping statistics --- 00:32:18.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.878 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.878 
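The namespace plumbing above moves one NIC port (cvl_0_0) into cvl_0_0_ns_spdk with 10.0.0.2, leaves 10.0.0.1 on the peer port cvl_0_1, opens TCP/4420 on the host side and ping-checks both directions. A rough way to reproduce the same two-endpoint layout without the physical ports is a veth pair; the namespace and interface names below are illustrative stand-ins, and everything needs root:
  ip netns add nvmf_tgt_ns                               # plays the role of cvl_0_0_ns_spdk
  ip link add veth_host type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns                 # "target" end moves into the namespace
  ip addr add 10.0.0.1/24 dev veth_host                  # host-side address, as in the trace
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_host up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                     # same sanity check the log performs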
19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:18.878 19:05:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:20.254 Waiting for block devices as requested 00:32:20.254 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:20.254 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:20.254 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:20.511 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:20.511 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:20.511 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:20.511 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:20.511 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:20.770 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:20.770 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:20.770 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:20.770 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:21.028 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:21.028 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:21.028 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:21.028 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:21.285 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:21.285 19:05:33 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:21.285 No valid GPT data, bailing 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:21.285 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:21.544 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:21.544 00:32:21.544 Discovery Log Number of Records 2, Generation counter 2 00:32:21.544 =====Discovery Log Entry 0====== 00:32:21.544 trtype: tcp 00:32:21.544 adrfam: ipv4 00:32:21.544 subtype: current discovery subsystem 00:32:21.544 treq: not specified, sq flow control disable supported 00:32:21.544 portid: 1 00:32:21.544 trsvcid: 4420 00:32:21.544 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:21.544 traddr: 10.0.0.1 00:32:21.544 eflags: none 00:32:21.544 sectype: none 00:32:21.544 =====Discovery Log Entry 1====== 
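The mkdir/echo/ln -s sequence above is plain nvmet configfs: create a subsystem with one namespace backed by a local block device, create a TCP port on 10.0.0.1:4420, link the two, then query it with a discovery. Condensed into a standalone sketch using the standard nvmet configfs attribute names (root; nvmet and nvmet_tcp modules available; the NQN and /dev/nvme0n1 mirror the values in the trace and should be substituted):
  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet
  modprobe nvmet
  modprobe nvmet_tcp
  mkdir "$cfg/subsystems/$nqn"                          # new subsystem
  mkdir "$cfg/subsystems/$nqn/namespaces/1"             # namespace 1 inside it
  mkdir "$cfg/ports/1"                                  # one transport port
  echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"
  echo /dev/nvme0n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
  echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
  echo tcp          > "$cfg/ports/1/addr_trtype"
  echo ipv4         > "$cfg/ports/1/addr_adrfam"
  echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
  echo 4420         > "$cfg/ports/1/addr_trsvcid"
  ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/$nqn"
  nvme discover -t tcp -a 10.0.0.1 -s 4420              # should list the discovery subsystem plus $nqn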
00:32:21.544 trtype: tcp 00:32:21.544 adrfam: ipv4 00:32:21.544 subtype: nvme subsystem 00:32:21.544 treq: not specified, sq flow control disable supported 00:32:21.544 portid: 1 00:32:21.544 trsvcid: 4420 00:32:21.544 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:21.544 traddr: 10.0.0.1 00:32:21.544 eflags: none 00:32:21.544 sectype: none 00:32:21.544 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:21.544 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:21.544 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.544 ===================================================== 00:32:21.544 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:21.544 ===================================================== 00:32:21.544 Controller Capabilities/Features 00:32:21.544 ================================ 00:32:21.544 Vendor ID: 0000 00:32:21.544 Subsystem Vendor ID: 0000 00:32:21.544 Serial Number: 54b2f4598404b5dede84 00:32:21.544 Model Number: Linux 00:32:21.544 Firmware Version: 6.7.0-68 00:32:21.544 Recommended Arb Burst: 0 00:32:21.544 IEEE OUI Identifier: 00 00 00 00:32:21.544 Multi-path I/O 00:32:21.544 May have multiple subsystem ports: No 00:32:21.544 May have multiple controllers: No 00:32:21.544 Associated with SR-IOV VF: No 00:32:21.544 Max Data Transfer Size: Unlimited 00:32:21.544 Max Number of Namespaces: 0 00:32:21.544 Max Number of I/O Queues: 1024 00:32:21.544 NVMe Specification Version (VS): 1.3 00:32:21.544 NVMe Specification Version (Identify): 1.3 00:32:21.544 Maximum Queue Entries: 1024 00:32:21.544 Contiguous Queues Required: No 00:32:21.544 Arbitration Mechanisms Supported 00:32:21.544 Weighted Round Robin: Not Supported 00:32:21.544 Vendor Specific: Not Supported 00:32:21.544 Reset Timeout: 7500 ms 00:32:21.544 Doorbell Stride: 4 bytes 00:32:21.544 NVM Subsystem Reset: Not Supported 00:32:21.544 Command Sets Supported 00:32:21.544 NVM Command Set: Supported 00:32:21.544 Boot Partition: Not Supported 00:32:21.544 Memory Page Size Minimum: 4096 bytes 00:32:21.544 Memory Page Size Maximum: 4096 bytes 00:32:21.544 Persistent Memory Region: Not Supported 00:32:21.544 Optional Asynchronous Events Supported 00:32:21.544 Namespace Attribute Notices: Not Supported 00:32:21.544 Firmware Activation Notices: Not Supported 00:32:21.544 ANA Change Notices: Not Supported 00:32:21.544 PLE Aggregate Log Change Notices: Not Supported 00:32:21.544 LBA Status Info Alert Notices: Not Supported 00:32:21.544 EGE Aggregate Log Change Notices: Not Supported 00:32:21.544 Normal NVM Subsystem Shutdown event: Not Supported 00:32:21.544 Zone Descriptor Change Notices: Not Supported 00:32:21.544 Discovery Log Change Notices: Supported 00:32:21.544 Controller Attributes 00:32:21.544 128-bit Host Identifier: Not Supported 00:32:21.544 Non-Operational Permissive Mode: Not Supported 00:32:21.544 NVM Sets: Not Supported 00:32:21.544 Read Recovery Levels: Not Supported 00:32:21.544 Endurance Groups: Not Supported 00:32:21.544 Predictable Latency Mode: Not Supported 00:32:21.544 Traffic Based Keep ALive: Not Supported 00:32:21.544 Namespace Granularity: Not Supported 00:32:21.544 SQ Associations: Not Supported 00:32:21.544 UUID List: Not Supported 00:32:21.544 Multi-Domain Subsystem: Not Supported 00:32:21.544 Fixed Capacity Management: Not Supported 00:32:21.544 Variable Capacity Management: Not 
Supported 00:32:21.544 Delete Endurance Group: Not Supported 00:32:21.544 Delete NVM Set: Not Supported 00:32:21.544 Extended LBA Formats Supported: Not Supported 00:32:21.544 Flexible Data Placement Supported: Not Supported 00:32:21.544 00:32:21.544 Controller Memory Buffer Support 00:32:21.544 ================================ 00:32:21.544 Supported: No 00:32:21.544 00:32:21.544 Persistent Memory Region Support 00:32:21.544 ================================ 00:32:21.544 Supported: No 00:32:21.544 00:32:21.544 Admin Command Set Attributes 00:32:21.544 ============================ 00:32:21.544 Security Send/Receive: Not Supported 00:32:21.544 Format NVM: Not Supported 00:32:21.544 Firmware Activate/Download: Not Supported 00:32:21.544 Namespace Management: Not Supported 00:32:21.544 Device Self-Test: Not Supported 00:32:21.544 Directives: Not Supported 00:32:21.544 NVMe-MI: Not Supported 00:32:21.544 Virtualization Management: Not Supported 00:32:21.544 Doorbell Buffer Config: Not Supported 00:32:21.544 Get LBA Status Capability: Not Supported 00:32:21.544 Command & Feature Lockdown Capability: Not Supported 00:32:21.544 Abort Command Limit: 1 00:32:21.544 Async Event Request Limit: 1 00:32:21.544 Number of Firmware Slots: N/A 00:32:21.544 Firmware Slot 1 Read-Only: N/A 00:32:21.544 Firmware Activation Without Reset: N/A 00:32:21.544 Multiple Update Detection Support: N/A 00:32:21.544 Firmware Update Granularity: No Information Provided 00:32:21.544 Per-Namespace SMART Log: No 00:32:21.544 Asymmetric Namespace Access Log Page: Not Supported 00:32:21.544 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:21.544 Command Effects Log Page: Not Supported 00:32:21.544 Get Log Page Extended Data: Supported 00:32:21.544 Telemetry Log Pages: Not Supported 00:32:21.544 Persistent Event Log Pages: Not Supported 00:32:21.544 Supported Log Pages Log Page: May Support 00:32:21.544 Commands Supported & Effects Log Page: Not Supported 00:32:21.544 Feature Identifiers & Effects Log Page:May Support 00:32:21.544 NVMe-MI Commands & Effects Log Page: May Support 00:32:21.544 Data Area 4 for Telemetry Log: Not Supported 00:32:21.544 Error Log Page Entries Supported: 1 00:32:21.544 Keep Alive: Not Supported 00:32:21.544 00:32:21.544 NVM Command Set Attributes 00:32:21.544 ========================== 00:32:21.544 Submission Queue Entry Size 00:32:21.544 Max: 1 00:32:21.544 Min: 1 00:32:21.544 Completion Queue Entry Size 00:32:21.544 Max: 1 00:32:21.544 Min: 1 00:32:21.544 Number of Namespaces: 0 00:32:21.544 Compare Command: Not Supported 00:32:21.544 Write Uncorrectable Command: Not Supported 00:32:21.544 Dataset Management Command: Not Supported 00:32:21.544 Write Zeroes Command: Not Supported 00:32:21.544 Set Features Save Field: Not Supported 00:32:21.544 Reservations: Not Supported 00:32:21.544 Timestamp: Not Supported 00:32:21.544 Copy: Not Supported 00:32:21.544 Volatile Write Cache: Not Present 00:32:21.544 Atomic Write Unit (Normal): 1 00:32:21.544 Atomic Write Unit (PFail): 1 00:32:21.544 Atomic Compare & Write Unit: 1 00:32:21.544 Fused Compare & Write: Not Supported 00:32:21.544 Scatter-Gather List 00:32:21.544 SGL Command Set: Supported 00:32:21.544 SGL Keyed: Not Supported 00:32:21.544 SGL Bit Bucket Descriptor: Not Supported 00:32:21.544 SGL Metadata Pointer: Not Supported 00:32:21.544 Oversized SGL: Not Supported 00:32:21.544 SGL Metadata Address: Not Supported 00:32:21.544 SGL Offset: Supported 00:32:21.544 Transport SGL Data Block: Not Supported 00:32:21.544 Replay Protected Memory Block: 
Not Supported 00:32:21.544 00:32:21.544 Firmware Slot Information 00:32:21.544 ========================= 00:32:21.544 Active slot: 0 00:32:21.544 00:32:21.544 00:32:21.544 Error Log 00:32:21.544 ========= 00:32:21.544 00:32:21.544 Active Namespaces 00:32:21.544 ================= 00:32:21.544 Discovery Log Page 00:32:21.544 ================== 00:32:21.544 Generation Counter: 2 00:32:21.544 Number of Records: 2 00:32:21.544 Record Format: 0 00:32:21.545 00:32:21.545 Discovery Log Entry 0 00:32:21.545 ---------------------- 00:32:21.545 Transport Type: 3 (TCP) 00:32:21.545 Address Family: 1 (IPv4) 00:32:21.545 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:21.545 Entry Flags: 00:32:21.545 Duplicate Returned Information: 0 00:32:21.545 Explicit Persistent Connection Support for Discovery: 0 00:32:21.545 Transport Requirements: 00:32:21.545 Secure Channel: Not Specified 00:32:21.545 Port ID: 1 (0x0001) 00:32:21.545 Controller ID: 65535 (0xffff) 00:32:21.545 Admin Max SQ Size: 32 00:32:21.545 Transport Service Identifier: 4420 00:32:21.545 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:21.545 Transport Address: 10.0.0.1 00:32:21.545 Discovery Log Entry 1 00:32:21.545 ---------------------- 00:32:21.545 Transport Type: 3 (TCP) 00:32:21.545 Address Family: 1 (IPv4) 00:32:21.545 Subsystem Type: 2 (NVM Subsystem) 00:32:21.545 Entry Flags: 00:32:21.545 Duplicate Returned Information: 0 00:32:21.545 Explicit Persistent Connection Support for Discovery: 0 00:32:21.545 Transport Requirements: 00:32:21.545 Secure Channel: Not Specified 00:32:21.545 Port ID: 1 (0x0001) 00:32:21.545 Controller ID: 65535 (0xffff) 00:32:21.545 Admin Max SQ Size: 32 00:32:21.545 Transport Service Identifier: 4420 00:32:21.545 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:21.545 Transport Address: 10.0.0.1 00:32:21.545 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.545 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.545 get_feature(0x01) failed 00:32:21.545 get_feature(0x02) failed 00:32:21.545 get_feature(0x04) failed 00:32:21.545 ===================================================== 00:32:21.545 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:21.545 ===================================================== 00:32:21.545 Controller Capabilities/Features 00:32:21.545 ================================ 00:32:21.545 Vendor ID: 0000 00:32:21.545 Subsystem Vendor ID: 0000 00:32:21.545 Serial Number: 06b41cf81bd1ba666dd8 00:32:21.545 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:21.545 Firmware Version: 6.7.0-68 00:32:21.545 Recommended Arb Burst: 6 00:32:21.545 IEEE OUI Identifier: 00 00 00 00:32:21.545 Multi-path I/O 00:32:21.545 May have multiple subsystem ports: Yes 00:32:21.545 May have multiple controllers: Yes 00:32:21.545 Associated with SR-IOV VF: No 00:32:21.545 Max Data Transfer Size: Unlimited 00:32:21.545 Max Number of Namespaces: 1024 00:32:21.545 Max Number of I/O Queues: 128 00:32:21.545 NVMe Specification Version (VS): 1.3 00:32:21.545 NVMe Specification Version (Identify): 1.3 00:32:21.545 Maximum Queue Entries: 1024 00:32:21.545 Contiguous Queues Required: No 00:32:21.545 Arbitration Mechanisms Supported 00:32:21.545 Weighted Round Robin: Not Supported 00:32:21.545 Vendor Specific: Not Supported 
00:32:21.545 Reset Timeout: 7500 ms 00:32:21.545 Doorbell Stride: 4 bytes 00:32:21.545 NVM Subsystem Reset: Not Supported 00:32:21.545 Command Sets Supported 00:32:21.545 NVM Command Set: Supported 00:32:21.545 Boot Partition: Not Supported 00:32:21.545 Memory Page Size Minimum: 4096 bytes 00:32:21.545 Memory Page Size Maximum: 4096 bytes 00:32:21.545 Persistent Memory Region: Not Supported 00:32:21.545 Optional Asynchronous Events Supported 00:32:21.545 Namespace Attribute Notices: Supported 00:32:21.545 Firmware Activation Notices: Not Supported 00:32:21.545 ANA Change Notices: Supported 00:32:21.545 PLE Aggregate Log Change Notices: Not Supported 00:32:21.545 LBA Status Info Alert Notices: Not Supported 00:32:21.545 EGE Aggregate Log Change Notices: Not Supported 00:32:21.545 Normal NVM Subsystem Shutdown event: Not Supported 00:32:21.545 Zone Descriptor Change Notices: Not Supported 00:32:21.545 Discovery Log Change Notices: Not Supported 00:32:21.545 Controller Attributes 00:32:21.545 128-bit Host Identifier: Supported 00:32:21.545 Non-Operational Permissive Mode: Not Supported 00:32:21.545 NVM Sets: Not Supported 00:32:21.545 Read Recovery Levels: Not Supported 00:32:21.545 Endurance Groups: Not Supported 00:32:21.545 Predictable Latency Mode: Not Supported 00:32:21.545 Traffic Based Keep ALive: Supported 00:32:21.545 Namespace Granularity: Not Supported 00:32:21.545 SQ Associations: Not Supported 00:32:21.545 UUID List: Not Supported 00:32:21.545 Multi-Domain Subsystem: Not Supported 00:32:21.545 Fixed Capacity Management: Not Supported 00:32:21.545 Variable Capacity Management: Not Supported 00:32:21.545 Delete Endurance Group: Not Supported 00:32:21.545 Delete NVM Set: Not Supported 00:32:21.545 Extended LBA Formats Supported: Not Supported 00:32:21.545 Flexible Data Placement Supported: Not Supported 00:32:21.545 00:32:21.545 Controller Memory Buffer Support 00:32:21.545 ================================ 00:32:21.545 Supported: No 00:32:21.545 00:32:21.545 Persistent Memory Region Support 00:32:21.545 ================================ 00:32:21.545 Supported: No 00:32:21.545 00:32:21.545 Admin Command Set Attributes 00:32:21.545 ============================ 00:32:21.545 Security Send/Receive: Not Supported 00:32:21.545 Format NVM: Not Supported 00:32:21.545 Firmware Activate/Download: Not Supported 00:32:21.545 Namespace Management: Not Supported 00:32:21.545 Device Self-Test: Not Supported 00:32:21.545 Directives: Not Supported 00:32:21.545 NVMe-MI: Not Supported 00:32:21.545 Virtualization Management: Not Supported 00:32:21.545 Doorbell Buffer Config: Not Supported 00:32:21.545 Get LBA Status Capability: Not Supported 00:32:21.545 Command & Feature Lockdown Capability: Not Supported 00:32:21.545 Abort Command Limit: 4 00:32:21.545 Async Event Request Limit: 4 00:32:21.545 Number of Firmware Slots: N/A 00:32:21.545 Firmware Slot 1 Read-Only: N/A 00:32:21.545 Firmware Activation Without Reset: N/A 00:32:21.545 Multiple Update Detection Support: N/A 00:32:21.545 Firmware Update Granularity: No Information Provided 00:32:21.545 Per-Namespace SMART Log: Yes 00:32:21.545 Asymmetric Namespace Access Log Page: Supported 00:32:21.545 ANA Transition Time : 10 sec 00:32:21.545 00:32:21.545 Asymmetric Namespace Access Capabilities 00:32:21.545 ANA Optimized State : Supported 00:32:21.545 ANA Non-Optimized State : Supported 00:32:21.545 ANA Inaccessible State : Supported 00:32:21.545 ANA Persistent Loss State : Supported 00:32:21.545 ANA Change State : Supported 00:32:21.545 ANAGRPID is not 
changed : No 00:32:21.545 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:21.545 00:32:21.545 ANA Group Identifier Maximum : 128 00:32:21.545 Number of ANA Group Identifiers : 128 00:32:21.545 Max Number of Allowed Namespaces : 1024 00:32:21.545 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:21.545 Command Effects Log Page: Supported 00:32:21.545 Get Log Page Extended Data: Supported 00:32:21.545 Telemetry Log Pages: Not Supported 00:32:21.545 Persistent Event Log Pages: Not Supported 00:32:21.545 Supported Log Pages Log Page: May Support 00:32:21.545 Commands Supported & Effects Log Page: Not Supported 00:32:21.545 Feature Identifiers & Effects Log Page:May Support 00:32:21.545 NVMe-MI Commands & Effects Log Page: May Support 00:32:21.545 Data Area 4 for Telemetry Log: Not Supported 00:32:21.545 Error Log Page Entries Supported: 128 00:32:21.545 Keep Alive: Supported 00:32:21.545 Keep Alive Granularity: 1000 ms 00:32:21.545 00:32:21.545 NVM Command Set Attributes 00:32:21.545 ========================== 00:32:21.545 Submission Queue Entry Size 00:32:21.545 Max: 64 00:32:21.545 Min: 64 00:32:21.545 Completion Queue Entry Size 00:32:21.545 Max: 16 00:32:21.545 Min: 16 00:32:21.545 Number of Namespaces: 1024 00:32:21.545 Compare Command: Not Supported 00:32:21.545 Write Uncorrectable Command: Not Supported 00:32:21.545 Dataset Management Command: Supported 00:32:21.545 Write Zeroes Command: Supported 00:32:21.545 Set Features Save Field: Not Supported 00:32:21.545 Reservations: Not Supported 00:32:21.545 Timestamp: Not Supported 00:32:21.545 Copy: Not Supported 00:32:21.545 Volatile Write Cache: Present 00:32:21.545 Atomic Write Unit (Normal): 1 00:32:21.545 Atomic Write Unit (PFail): 1 00:32:21.545 Atomic Compare & Write Unit: 1 00:32:21.545 Fused Compare & Write: Not Supported 00:32:21.545 Scatter-Gather List 00:32:21.545 SGL Command Set: Supported 00:32:21.545 SGL Keyed: Not Supported 00:32:21.545 SGL Bit Bucket Descriptor: Not Supported 00:32:21.545 SGL Metadata Pointer: Not Supported 00:32:21.545 Oversized SGL: Not Supported 00:32:21.545 SGL Metadata Address: Not Supported 00:32:21.545 SGL Offset: Supported 00:32:21.545 Transport SGL Data Block: Not Supported 00:32:21.545 Replay Protected Memory Block: Not Supported 00:32:21.545 00:32:21.545 Firmware Slot Information 00:32:21.545 ========================= 00:32:21.545 Active slot: 0 00:32:21.545 00:32:21.545 Asymmetric Namespace Access 00:32:21.545 =========================== 00:32:21.545 Change Count : 0 00:32:21.545 Number of ANA Group Descriptors : 1 00:32:21.545 ANA Group Descriptor : 0 00:32:21.545 ANA Group ID : 1 00:32:21.545 Number of NSID Values : 1 00:32:21.545 Change Count : 0 00:32:21.545 ANA State : 1 00:32:21.545 Namespace Identifier : 1 00:32:21.545 00:32:21.545 Commands Supported and Effects 00:32:21.545 ============================== 00:32:21.545 Admin Commands 00:32:21.545 -------------- 00:32:21.545 Get Log Page (02h): Supported 00:32:21.545 Identify (06h): Supported 00:32:21.545 Abort (08h): Supported 00:32:21.545 Set Features (09h): Supported 00:32:21.545 Get Features (0Ah): Supported 00:32:21.545 Asynchronous Event Request (0Ch): Supported 00:32:21.545 Keep Alive (18h): Supported 00:32:21.545 I/O Commands 00:32:21.545 ------------ 00:32:21.545 Flush (00h): Supported 00:32:21.545 Write (01h): Supported LBA-Change 00:32:21.545 Read (02h): Supported 00:32:21.545 Write Zeroes (08h): Supported LBA-Change 00:32:21.545 Dataset Management (09h): Supported 00:32:21.545 00:32:21.545 Error Log 00:32:21.545 ========= 
00:32:21.545 Entry: 0 00:32:21.545 Error Count: 0x3 00:32:21.545 Submission Queue Id: 0x0 00:32:21.545 Command Id: 0x5 00:32:21.545 Phase Bit: 0 00:32:21.545 Status Code: 0x2 00:32:21.545 Status Code Type: 0x0 00:32:21.545 Do Not Retry: 1 00:32:21.545 Error Location: 0x28 00:32:21.545 LBA: 0x0 00:32:21.545 Namespace: 0x0 00:32:21.545 Vendor Log Page: 0x0 00:32:21.545 ----------- 00:32:21.545 Entry: 1 00:32:21.545 Error Count: 0x2 00:32:21.545 Submission Queue Id: 0x0 00:32:21.545 Command Id: 0x5 00:32:21.545 Phase Bit: 0 00:32:21.545 Status Code: 0x2 00:32:21.545 Status Code Type: 0x0 00:32:21.545 Do Not Retry: 1 00:32:21.545 Error Location: 0x28 00:32:21.546 LBA: 0x0 00:32:21.546 Namespace: 0x0 00:32:21.546 Vendor Log Page: 0x0 00:32:21.546 ----------- 00:32:21.546 Entry: 2 00:32:21.546 Error Count: 0x1 00:32:21.546 Submission Queue Id: 0x0 00:32:21.546 Command Id: 0x4 00:32:21.546 Phase Bit: 0 00:32:21.546 Status Code: 0x2 00:32:21.546 Status Code Type: 0x0 00:32:21.546 Do Not Retry: 1 00:32:21.546 Error Location: 0x28 00:32:21.546 LBA: 0x0 00:32:21.546 Namespace: 0x0 00:32:21.546 Vendor Log Page: 0x0 00:32:21.546 00:32:21.546 Number of Queues 00:32:21.546 ================ 00:32:21.546 Number of I/O Submission Queues: 128 00:32:21.546 Number of I/O Completion Queues: 128 00:32:21.546 00:32:21.546 ZNS Specific Controller Data 00:32:21.546 ============================ 00:32:21.546 Zone Append Size Limit: 0 00:32:21.546 00:32:21.546 00:32:21.546 Active Namespaces 00:32:21.546 ================= 00:32:21.546 get_feature(0x05) failed 00:32:21.546 Namespace ID:1 00:32:21.546 Command Set Identifier: NVM (00h) 00:32:21.546 Deallocate: Supported 00:32:21.546 Deallocated/Unwritten Error: Not Supported 00:32:21.546 Deallocated Read Value: Unknown 00:32:21.546 Deallocate in Write Zeroes: Not Supported 00:32:21.546 Deallocated Guard Field: 0xFFFF 00:32:21.546 Flush: Supported 00:32:21.546 Reservation: Not Supported 00:32:21.546 Namespace Sharing Capabilities: Multiple Controllers 00:32:21.546 Size (in LBAs): 1953525168 (931GiB) 00:32:21.546 Capacity (in LBAs): 1953525168 (931GiB) 00:32:21.546 Utilization (in LBAs): 1953525168 (931GiB) 00:32:21.546 UUID: 3b191cb3-05d3-4dbe-9c2d-e08aa7749eaa 00:32:21.546 Thin Provisioning: Not Supported 00:32:21.546 Per-NS Atomic Units: Yes 00:32:21.546 Atomic Boundary Size (Normal): 0 00:32:21.546 Atomic Boundary Size (PFail): 0 00:32:21.546 Atomic Boundary Offset: 0 00:32:21.546 NGUID/EUI64 Never Reused: No 00:32:21.546 ANA group ID: 1 00:32:21.546 Namespace Write Protected: No 00:32:21.546 Number of LBA Formats: 1 00:32:21.546 Current LBA Format: LBA Format #00 00:32:21.546 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:21.546 00:32:21.546 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:21.546 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:21.546 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:21.546 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:21.546 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:21.546 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:21.546 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:21.546 rmmod nvme_tcp 00:32:21.805 rmmod nvme_fabrics 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.805 19:05:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:23.709 19:05:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:25.083 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:25.084 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:25.084 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:25.084 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:25.084 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:25.084 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:25.084 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:25.084 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:25.084 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:25.084 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:25.084 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:25.084 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:25.084 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:25.084 0000:80:04.2 (8086 0e22): ioatdma 
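The clean_kernel_target calls above undo the configfs export in reverse order: an echo 0 (presumably taking the namespace offline), removal of the port-to-subsystem link, removal of the now-empty directories, then unloading the nvmet modules. As a standalone sketch matching the creation sequence sketched earlier, with the same placeholder NQN:
  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet
  echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"    # take the namespace offline first
  rm  -f "$cfg/ports/1/subsystems/$nqn"                  # unlink subsystem from the port
  rmdir  "$cfg/subsystems/$nqn/namespaces/1"
  rmdir  "$cfg/ports/1"
  rmdir  "$cfg/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet                            # only succeeds once nothing holds the modules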
-> vfio-pci 00:32:25.084 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:25.084 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:26.020 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:26.020 00:32:26.020 real 0m9.169s 00:32:26.020 user 0m1.953s 00:32:26.020 sys 0m3.234s 00:32:26.020 19:05:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:26.020 19:05:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.020 ************************************ 00:32:26.020 END TEST nvmf_identify_kernel_target 00:32:26.020 ************************************ 00:32:26.020 19:05:37 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:26.020 19:05:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:26.020 19:05:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:26.020 19:05:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:26.020 ************************************ 00:32:26.020 START TEST nvmf_auth_host 00:32:26.020 ************************************ 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:26.020 * Looking for test storage... 00:32:26.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.020 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:26.021 19:05:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host 
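auth.sh declares three digests and five ffdhe DH groups above, and the test presumably walks the full digest by DH-group matrix once the target is up. A skeleton of that loop, with a placeholder body because the per-combination logic is not visible in this part of the log:
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          # placeholder only; the real test configures DH-HMAC-CHAP with this pair and connects
          echo "would exercise DH-HMAC-CHAP with digest=$digest dhgroup=$dhgroup"
      done
  done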
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.926 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.187 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:28.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:28.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:28.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:28.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:28.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:32:28.188 00:32:28.188 --- 10.0.0.2 ping statistics --- 00:32:28.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.188 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:28.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:32:28.188 00:32:28.188 --- 10.0.0.1 ping statistics --- 00:32:28.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.188 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3672302 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
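The target application is launched inside the namespace with the nvme_auth debug log enabled, and the script then blocks until the target's RPC socket answers. A rough local equivalent; the SPDK_DIR placeholder and the polling loop are assumptions standing in for the workspace path and the waitforlisten helper, not the script's own code:
  SPDK_DIR=/path/to/spdk                      # assumption: a built SPDK tree
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
  tgt_pid=$!
  # poll the default RPC socket until the target answers (roughly what waitforlisten does)
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  echo "nvmf_tgt is up as pid $tgt_pid"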
waitforlisten 3672302 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3672302 ']' 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:28.188 19:05:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.448 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ac304eca7b1409ea47ebcddc07d6f0da 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3V1 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ac304eca7b1409ea47ebcddc07d6f0da 0 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ac304eca7b1409ea47ebcddc07d6f0da 0 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ac304eca7b1409ea47ebcddc07d6f0da 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3V1 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3V1 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3V1 
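The gen_dhchap_key calls traced above draw len/2 random bytes with xxd, pick a file name with mktemp, and turn the hex string into a DHHC-1 secret through an embedded 'python -' snippet that xtrace does not expand. A minimal standalone sketch of that flow follows; the exact encoding used by nvmf/common.sh is not visible in the trace, so the base64(key || CRC32) layout below is an assumption based on the conventional DHHC-1 secret format, and gen_dhchap_key_sketch is a hypothetical helper name, not the suite's function.

# Minimal sketch, assuming the conventional DHHC-1 layout -- not the nvmf/common.sh code.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                            # e.g. "null" 32, "sha512" 64
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # hex string, as in the trace
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # Assumed layout: DHHC-1:<hash id>:<base64 of key bytes + little-endian CRC32>:
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

The trace stores the resulting path in keys[0] and repeats the call with other digest/length pairs to fill keys[1..4] and ckeys[0..3].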
00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1d45622f4bc3ac8c0a18be967241ef4ef982b29ce12a6e2c701d82ac5f1418a2 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vmz 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1d45622f4bc3ac8c0a18be967241ef4ef982b29ce12a6e2c701d82ac5f1418a2 3 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1d45622f4bc3ac8c0a18be967241ef4ef982b29ce12a6e2c701d82ac5f1418a2 3 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1d45622f4bc3ac8c0a18be967241ef4ef982b29ce12a6e2c701d82ac5f1418a2 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vmz 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vmz 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Vmz 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ebf4d4ae87b846c7d3e5cd76e6f76f7f1b20b132a5162355 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ANt 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ebf4d4ae87b846c7d3e5cd76e6f76f7f1b20b132a5162355 0 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ebf4d4ae87b846c7d3e5cd76e6f76f7f1b20b132a5162355 0 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ebf4d4ae87b846c7d3e5cd76e6f76f7f1b20b132a5162355 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ANt 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ANt 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ANt 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ffd01f1dbb843eec6b1ab3a0a0fa8e871249af7eb47ad82 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8Ob 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ffd01f1dbb843eec6b1ab3a0a0fa8e871249af7eb47ad82 2 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ffd01f1dbb843eec6b1ab3a0a0fa8e871249af7eb47ad82 2 00:32:28.705 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ffd01f1dbb843eec6b1ab3a0a0fa8e871249af7eb47ad82 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8Ob 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8Ob 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.8Ob 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=564e0701cd53b69fab825a3aeb75343a 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EBl 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 564e0701cd53b69fab825a3aeb75343a 1 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 564e0701cd53b69fab825a3aeb75343a 1 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=564e0701cd53b69fab825a3aeb75343a 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:28.706 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EBl 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EBl 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EBl 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1a347d39f5bf486d03de91aaab4c097a 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yp7 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a347d39f5bf486d03de91aaab4c097a 1 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a347d39f5bf486d03de91aaab4c097a 1 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a347d39f5bf486d03de91aaab4c097a 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yp7 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yp7 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.yp7 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.964 19:05:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=73fa69f3b2454bfb9a234e45fa8d089f8361a19dfc3de9f3 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4mW 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 73fa69f3b2454bfb9a234e45fa8d089f8361a19dfc3de9f3 2 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 73fa69f3b2454bfb9a234e45fa8d089f8361a19dfc3de9f3 2 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=73fa69f3b2454bfb9a234e45fa8d089f8361a19dfc3de9f3 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4mW 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4mW 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4mW 00:32:28.964 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eba48af1d61697bc79185d2f78d5d313 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.L1E 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eba48af1d61697bc79185d2f78d5d313 0 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eba48af1d61697bc79185d2f78d5d313 0 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eba48af1d61697bc79185d2f78d5d313 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:28.965 19:05:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.L1E 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.L1E 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.L1E 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5b02eda2959a4c49a5cdda2bd71815d622d38a3cdd09e6b043b13fe2964a4c23 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6kh 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5b02eda2959a4c49a5cdda2bd71815d622d38a3cdd09e6b043b13fe2964a4c23 3 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5b02eda2959a4c49a5cdda2bd71815d622d38a3cdd09e6b043b13fe2964a4c23 3 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5b02eda2959a4c49a5cdda2bd71815d622d38a3cdd09e6b043b13fe2964a4c23 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6kh 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6kh 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6kh 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3672302 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3672302 ']' 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
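At this point the trace has generated five host keys (keys[0..4]) and controller keys for the first four slots (ckeys[0..3]); ckeys[4] is left empty, so the key4 attach later in the trace runs without --dhchap-ctrlr-key. The host/auth.sh loop that runs next registers every file with the nvmf_tgt keyring via the keyring_file_add_key RPC; a rough standalone equivalent of that loop, assuming rpc.py is pointed at the same /var/tmp/spdk.sock that the suite's rpc_cmd wrapper targets:

for i in "${!keys[@]}"; do
    # register the host key for slot i under the name "key$i"
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
    # register the matching controller key only when one exists for this slot
    [[ -n ${ckeys[$i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done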
00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:28.965 19:05:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3V1 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Vmz ]] 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vmz 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.532 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ANt 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.8Ob ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8Ob 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EBl 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.yp7 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yp7 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4mW 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.L1E ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.L1E 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6kh 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
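The configure_kernel_target variables just set all point into the kernel nvmet configfs tree; the commands that follow in the trace create the subsystem, a namespace backed by /dev/nvme0n1, and a TCP port on 10.0.0.1:4420, then link the two. xtrace does not show redirection targets, so the attribute names in this sketch are the standard nvmet configfs ones and are an assumption about where each echo lands:

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"   # assumed target of the 'echo SPDK-...' line
echo 1            > "$subsys/attr_allow_any_host"               # presumably set back to 0 later by nvmet_auth_init
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"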
00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:29.533 19:05:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:30.467 Waiting for block devices as requested 00:32:30.467 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:30.726 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:30.726 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:30.984 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:30.984 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:30.985 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:31.246 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:31.246 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:31.246 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:31.246 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:31.246 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:31.540 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:31.540 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:31.540 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:31.540 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:31.797 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:31.797 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:32.055 No valid GPT data, bailing 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:32.055 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:32.313 00:32:32.313 Discovery Log Number of Records 2, Generation counter 2 00:32:32.313 =====Discovery Log Entry 0====== 00:32:32.313 trtype: tcp 00:32:32.313 adrfam: ipv4 00:32:32.313 subtype: current discovery subsystem 00:32:32.313 treq: not specified, sq flow control disable supported 00:32:32.313 portid: 1 00:32:32.313 trsvcid: 4420 00:32:32.313 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:32.313 traddr: 10.0.0.1 00:32:32.313 eflags: none 00:32:32.313 sectype: none 00:32:32.313 =====Discovery Log Entry 1====== 00:32:32.313 trtype: tcp 00:32:32.313 adrfam: ipv4 00:32:32.313 subtype: nvme subsystem 00:32:32.313 treq: not specified, sq flow control disable supported 00:32:32.313 portid: 1 00:32:32.313 trsvcid: 4420 00:32:32.313 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:32.313 traddr: 10.0.0.1 00:32:32.313 eflags: none 00:32:32.313 sectype: none 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 
]] 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:32.313 19:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.313 nvme0n1 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.313 
19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.313 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.572 
19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.572 nvme0n1 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.572 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.833 19:05:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.833 nvme0n1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
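From here the suite sweeps every digest x dhgroup x key-slot combination: nvmet_auth_set_key re-keys the kernel target for the slot, and connect_authenticate restricts the SPDK host to that single combination, attaches, verifies the controller came up, and detaches. A condensed sketch of one iteration (sha256 / ffdhe2048 / key1, as in the trace above), again assuming rpc.py stands in for the suite's rpc_cmd wrapper:

# limit the host to one digest/dhgroup, then authenticate with key1/ckey1
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# DH-HMAC-CHAP succeeded if the controller shows up under the requested name
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0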
00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.833 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 nvme0n1 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:33.094 19:05:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.094 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.095 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.095 19:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.095 19:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.095 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.095 19:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.355 nvme0n1 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:33.355 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.356 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.617 nvme0n1 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.617 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.878 nvme0n1 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.878 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.138 nvme0n1 00:32:34.138 
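The stretch of trace above, and the iterations that follow it, repeat one host-side cycle per digest/dhgroup/key combination: restrict the allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach the controller with the host key and (when one exists) the controller key, confirm via bdev_nvme_get_controllers that a controller named nvme0 appeared, then detach it. The test drives this through its rpc_cmd wrapper; outside the harness the same cycle could be reproduced with SPDK's scripts/rpc.py client. A minimal sketch, assuming rpc.py is on PATH, the target is already listening on 10.0.0.1:4420, and DH-CHAP secrets were registered earlier under the names key1/ckey1 (that setup is not shown in this excerpt):

    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" on success
    rpc.py bdev_nvme_detach_controller nvme0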
19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.138 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.139 19:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.399 nvme0n1 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.399 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.400 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.660 nvme0n1 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.660 
19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.660 19:05:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.660 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.919 nvme0n1 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:34.919 19:05:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.919 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.179 nvme0n1 00:32:35.179 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.179 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.179 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.179 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.179 19:05:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.179 19:05:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.179 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.438 19:05:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.438 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.696 nvme0n1 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.696 19:05:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.696 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.955 nvme0n1 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
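What produces the repetition is a pair of nested loops in host/auth.sh, visible in the host/auth.sh@101-@104 markers above: the outer loop walks the configured DH groups, the inner loop walks key IDs 0 through 4, and each pass programs the target's expected secret before attempting the authenticated attach. For key ID 4 the controller-key slot is empty, so connect_authenticate omits --dhchap-ctrlr-key and only the host is authenticated. A simplified rendering of that control flow, using only the group list seen in this part of the log and the keys[]/ckeys[] arrays and helper functions the script defines earlier (not shown in this excerpt):

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)    # groups exercised in this stretch of the log
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                    # key IDs 0..4, secrets generated earlier in the script
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target's expected secret
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify via get_controllers, detach
        done
    done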
00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.955 19:05:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.214 nvme0n1 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.214 19:05:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.214 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.473 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.473 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.473 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:36.473 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.473 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.473 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.474 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.734 nvme0n1 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:36.734 19:05:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.734 19:05:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.304 nvme0n1 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.304 
19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.304 19:05:49 
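The nvmf/common.sh@741-755 markers that recur before every attach are get_main_ns_ip choosing which address the host should dial: an associative array maps the transport to the name of the variable holding the address, and bash indirect expansion then yields 10.0.0.1 for tcp. A minimal re-sketch of that logic as it appears in the trace (the exact body of the real function may differ):

    # Reconstruction of the IP-selection logic traced at nvmf/common.sh@741-755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # no transport configured
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                           # indirect: the address itself
        echo "${!ip}"                                         # prints 10.0.0.1 in this run
    }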
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.304 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.874 nvme0n1 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.874 19:05:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.443 nvme0n1 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.443 
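The host/auth.sh@100-103 markers interleaved through this run are the outer test loops: every digest is paired with every DH group and every key index, the target is primed first and the host then authenticates against it. A compressed sketch of that driver, reconstructed from the trace (only the digests, groups and key indices visible in this excerpt are listed; the real arrays may hold more entries and a different ordering):

    # Loop structure traced at host/auth.sh@100-104.
    digests=("sha256" "sha384")                      # values seen in this excerpt
    dhgroups=("ffdhe2048" "ffdhe6144" "ffdhe8192")   # values seen in this excerpt
    # keys[0..4] and ckeys[0..3] hold DHHC-1 secrets; ckeys[4] is empty.

    for digest in "${digests[@]}"; do                # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
            for keyid in "${!keys[@]}"; do           # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host side
            done
        done
    done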
19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.443 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.703 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.703 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.703 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.703 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.703 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.703 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.704 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.704 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.704 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.704 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.704 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.704 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.273 nvme0n1 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.273 19:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.842 nvme0n1 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:39.842 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.843 19:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.783 nvme0n1 00:32:40.783 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.783 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.783 19:05:52 
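Each connect_authenticate pass traced above boils down to four RPCs against the SPDK host: restrict the allowed digest and DH group, attach with the named DH-HMAC-CHAP keys, confirm the controller actually appeared, and detach again. A standalone sketch of one such cycle, assuming rpc_cmd in the trace wraps SPDK's scripts/rpc.py and that the keyring entries key0/ckey0 were registered earlier in the test (that setup is not shown in this excerpt):

    #!/usr/bin/env bash
    # One connect/verify/disconnect cycle, mirroring host/auth.sh@60-65.
    set -e
    rpc=scripts/rpc.py   # assumed wrapper target of rpc_cmd

    # Only advertise the digest/DH group under test.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Attach, authenticating with keyring entries key0 (host) and ckey0 (controller).
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The controller only shows up if authentication succeeded.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Tear down before the next digest/dhgroup/key combination.
    "$rpc" bdev_nvme_detach_controller nvme0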
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.783 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.783 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.783 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.783 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.784 19:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.721 nvme0n1 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.721 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.979 19:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.913 nvme0n1 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.913 
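The --dhchap-key key2 / --dhchap-ctrlr-key ckey2 arguments in the attach call traced just above are names, not secrets: they refer to entries in the SPDK keyring, which the test populates before this section from files containing the DHHC-1 strings. That registration is not shown in this excerpt; the sketch below assumes it was done with the keyring_file_add_key RPC and uses hypothetical file paths.

    # Assumed (not shown in this log): register the DHHC-1 secrets as keyring
    # entries so bdev_nvme_attach_controller can reference them by name.
    # The /tmp paths are placeholders for illustration only.
    scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key2    # host secret file
    scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.ckey2   # controller secret file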
19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.913 19:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.851 nvme0n1 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.852 
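All of the secrets in this section use the textual DH-HMAC-CHAP form "DHHC-1:<t>:<base64>:"; reading the middle field as a transformation indicator and the base64 blob as the secret material is background knowledge, not something this log states. The helper below only splits a string of that shape and reports its decoded length, using one of the key values that appears verbatim in the trace above:

    # Split a DHHC-1 secret string and report its decoded payload size.
    # Field meanings are assumed background; this log never spells them out.
    inspect_dhchap_secret() {
        local magic t b64 rest
        IFS=: read -r magic t b64 rest <<< "$1"
        [[ $magic == DHHC-1 ]] || { echo "not a DHHC-1 secret" >&2; return 1; }
        printf 'transform=%s decoded=%d bytes\n' "$t" \
            "$(printf '%s' "$b64" | base64 -d | wc -c)"
    }
    inspect_dhchap_secret "DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS:"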
19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.852 19:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.792 nvme0n1 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.792 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.052 nvme0n1 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
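host/auth.sh@58, traced just above, is the small trick that makes the controller key optional: bash's ${var:+...} expansion yields the extra --dhchap-ctrlr-key argument only when a ckey exists for that index, which is why the keyid=4 attaches in this section carry --dhchap-key key4 alone. A self-contained illustration of the pattern (placeholder values, not the real secrets):

    # The ${ckeys[keyid]:+...} pattern from host/auth.sh@58 in isolation.
    ckeys=([0]="some-ctrlr-secret" [4]="")   # placeholder contents
    for keyid in 0 4; do
        # Non-empty ckeys[keyid] -> two extra argv words;
        # empty or unset         -> expands to nothing at all.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key argument>}"
    done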
00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.052 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.053 19:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.312 nvme0n1 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.312 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.572 nvme0n1 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.572 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.573 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.831 nvme0n1 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.831 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.832 nvme0n1 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.832 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.090 19:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.091 nvme0n1 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.091 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.349 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.349 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.349 19:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.349 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.349 19:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
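The trace cycles through the same connect/verify/detach sequence for every digest, DH group, and key index: bdev_nvme_set_options pins the allowed DH-HMAC-CHAP digest and FFDHE group, bdev_nvme_attach_controller presents the host secret (and the controller secret when one exists for that key slot), and bdev_nvme_get_controllers plus bdev_nvme_detach_controller confirm and tear down the authenticated connection. The secrets appear in the DH-HMAC-CHAP representation (DHHC-1:xx: followed by a base64 blob). A condensed sketch of one iteration follows; the rpc_cmd shim, the function name, and its positional arguments are illustrative only, while the RPC names, flags, addresses, and NQNs are the ones visible in the trace, and key0..key4 / ckey0..ckey4 are assumed to have been registered earlier in the run.

    #!/usr/bin/env bash
    # Minimal stand-in for the autotest rpc_cmd helper (assumption: SPDK's
    # scripts/rpc.py is reachable at this relative path).
    rpc_cmd() { ./scripts/rpc.py "$@"; }

    # One pass of the pattern recorded above,
    # e.g.: connect_sketch sha384 ffdhe3072 2 ckey2
    connect_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey_name=${4:-}
        local ckey_arg=()
        # The trace only passes --dhchap-ctrlr-key when a controller key
        # exists for the slot (key index 4 has none).
        [[ -n $ckey_name ]] && ckey_arg=(--dhchap-ctrlr-key "$ckey_name")

        # Restrict the initiator to a single digest / FFDHE group per pass.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Authenticate against the target listening on 10.0.0.1:4420.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey_arg[@]}"

        # Verify the controller came up, then detach before the next combination.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }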
00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.349 nvme0n1 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.349 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.608 nvme0n1 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.608 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.895 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.895 nvme0n1 00:32:46.896 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.896 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.896 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.896 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.896 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.896 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.153 nvme0n1 00:32:47.153 19:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.153 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.154 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.154 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.154 19:05:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.154 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:47.413 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.414 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.673 nvme0n1 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:47.673 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.674 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.932 nvme0n1 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.932 19:05:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.932 19:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.501 nvme0n1 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:48.501 19:06:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.501 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.761 nvme0n1 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:48.761 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.020 nvme0n1 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.020 19:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.590 nvme0n1 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.590 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.591 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.158 nvme0n1 00:32:50.158 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.158 19:06:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.158 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.158 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.158 19:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.158 19:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.158 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.417 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.987 nvme0n1 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.987 19:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.557 nvme0n1 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
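For orientation, the trace above is the tail of host/auth.sh's nested sweep over digests, DH groups and key IDs: for every combination it first programs the key into the target (nvmet_auth_set_key) and then exercises the host-side connection (connect_authenticate). A minimal sketch of that driver loop, reconstructed from the xtrace lines — the array contents shown are only the values visible in this slice of the log, and the key material is elided, not the real test keys:

    # driver loop as suggested by host/auth.sh@100-104 in the trace above
    digests=(sha384 sha512)                               # values seen in this log slice
    dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)    # values seen in this log slice
    keys=(DHHC-1:00:... DHHC-1:00:... DHHC-1:01:... DHHC-1:02:... DHHC-1:03:...)  # keyids 0-4, elided

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: set hmac(<digest>), DH group, DHHC-1 key
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: set options, attach, verify, detach
        done
      done
    done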
00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.557 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.126 nvme0n1 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
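connect_authenticate itself is the host half that the rpc_cmd lines keep repeating: restrict the allowed digest and DH group, attach a controller over TCP with the named DH-HCHAP keys, confirm the controller actually appeared, then detach. A condensed sketch under the same assumptions — the 10.0.0.1:4420 address and the nqn.2024-02.io.spdk names are taken from the trace, and the xtrace/error plumbing from autotest_common.sh is omitted:

    connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # controller (bidirectional) key is optional; keyid 4 in this run has none
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # authentication succeeded only if the bdev controller shows up under its expected name
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    }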
00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.126 19:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.063 nvme0n1 00:32:53.063 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.063 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.063 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.063 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.063 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.063 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.321 19:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.307 nvme0n1 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.307 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.308 19:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.246 nvme0n1 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.246 19:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.184 nvme0n1 00:32:56.184 19:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 19:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:56.184 19:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.184 19:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.184 19:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.184 19:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.184 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.185 19:06:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.185 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.121 nvme0n1 00:32:57.121 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.121 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.121 19:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.121 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.121 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.121 19:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.380 nvme0n1 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.380 19:06:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.380 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.641 nvme0n1 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.641 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.902 nvme0n1 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.902 19:06:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.902 19:06:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.902 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.160 nvme0n1 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.160 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.161 19:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.419 nvme0n1 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.419 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.679 nvme0n1 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.679 
19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.679 19:06:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.679 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.938 nvme0n1 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
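The trace repeats one fixed pattern for every digest/dhgroup/keyid combination: the target-side key is installed, the host is restricted to the single digest and DH group under test, a controller is attached with the matching key pair, its name is checked, and it is detached again. Below is a condensed sketch of one such iteration, using only the calls visible in this trace (rpc_cmd is assumed to forward to SPDK's scripts/rpc.py; key1/ckey1 are key names registered earlier in the test and not shown in this part of the log):

  # target side: install the host key (and controller key, if any) for this keyid
  nvmet_auth_set_key sha512 ffdhe3072 1

  # host side: allow only the digest/dhgroup pair under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # authenticate while attaching; --dhchap-ctrlr-key supplies the controller-side
  # secret for bidirectional authentication
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # verify the controller came up, then tear it down before the next combination
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0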
00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.938 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.197 nvme0n1 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.197 19:06:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.197 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
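The get_main_ns_ip fragments traced here choose which address the host dials: an associative array maps each transport to the name of the variable holding its address, and for tcp that resolves to the 10.0.0.1 initiator address echoed above. A minimal reconstruction of that selection logic, based only on what the xtrace shows (the real helper in nvmf/common.sh may do more, e.g. network-namespace handling, and the TEST_TRANSPORT name is an assumption):

  get_main_ns_ip() {
      local ip
      # map transport -> name of the variable that holds the address to use
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      local transport=${TEST_TRANSPORT:-tcp}      # "tcp" in this run
      local ip_var=${ip_candidates[$transport]}   # -> NVMF_INITIATOR_IP
      ip=${!ip_var}                               # indirect expansion, e.g. 10.0.0.1
      [[ -n $ip ]] && echo "$ip"
  }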
00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.198 19:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.458 nvme0n1 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.458 
19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.458 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.459 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.719 nvme0n1 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:59.719 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.720 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.979 nvme0n1 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.979 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.240 19:06:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.240 19:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.499 nvme0n1 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
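Worth noting in these iterations is the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion that keeps recurring in the trace: when a controller-side secret exists for the keyid, the array expands to the extra flag pair, and when it is empty (the keyid=4 iterations above, where ckey= is blank) it expands to nothing, so the attach runs with one-way authentication only. A tiny standalone illustration of that bash idiom; the array contents here are made up for the example and are not the test's real secrets:

  #!/usr/bin/env bash
  # ckeys[2] has a controller secret, ckeys[4] deliberately does not
  declare -A ckeys=( [2]="DHHC-1:01:example-secret==:" [4]="" )

  for keyid in 2 4; do
      # :+ expands to the flag pair only when ckeys[keyid] is non-empty
      extra=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#extra[@]} extra arg(s): ${extra[*]}"
  done

  # prints:
  #   keyid=2 -> 2 extra arg(s): --dhchap-ctrlr-key ckey2
  #   keyid=4 -> 0 extra arg(s):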
00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.499 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.500 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.759 nvme0n1 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.759 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.760 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.330 nvme0n1 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.330 19:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.590 nvme0n1 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.590 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
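The nvmet_auth_set_key calls traced above configure the kernel-target side of each iteration: they echo the digest as 'hmac(sha512)', the DH group name, and the DHHC-1 host and controller secrets. A minimal sketch of what those echoes suggest follows; the configfs path and attribute names follow the standard Linux nvmet layout and are assumptions, since the body of nvmet_auth_set_key is not shown in this excerpt.

# Hedged sketch of the target-side key setup implied by the echoes above.
# Assumes the host entry already exists under the nvmet configfs tree.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # digest used for DH-HMAC-CHAP
echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # DH group for this iteration
echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1:xx:... host secret
if [[ -n "$ckey" ]]; then
    echo "$ckey" > "$host_dir/dhchap_ctrl_key"     # optional controller (bidirectional) secret
fi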
00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.591 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.159 nvme0n1 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:33:02.159 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
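On the initiator side, each connect_authenticate iteration in the trace reduces to the same four RPCs: restrict the allowed digest and DH group, attach the controller with the per-iteration DH-HMAC-CHAP key pair, confirm it shows up in bdev_nvme_get_controllers, and detach it again. A condensed sketch of that cycle, assuming rpc_cmd is SPDK's test RPC wrapper and that the named keys were registered earlier in the test (not shown in this excerpt):

# One connect_authenticate cycle, condensed from the trace (sha512 / ffdhe6144 / keyid 1).
digest=sha512
dhgroup=ffdhe6144
keyid=1

rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded if the controller is listed; detach so the next
# digest/DH group/key combination starts from a clean state.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0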
00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.160 19:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.726 nvme0n1 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.726 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.989 19:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.581 nvme0n1 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.581 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.146 nvme0n1 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.146 19:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.711 nvme0n1 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.711 19:06:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMzMDRlY2E3YjE0MDllYTQ3ZWJjZGRjMDdkNmYwZGGaX1AS: 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ0NTYyMmY0YmMzYWM4YzBhMThiZTk2NzI0MWVmNGVmOTgyYjI5Y2UxMmE2ZTJjNzAxZDgyYWM1ZjE0MThhMkZpHCY=: 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.711 19:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.712 19:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.712 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.712 19:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.650 nvme0n1 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.650 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.908 19:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.844 nvme0n1 00:33:06.844 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.844 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.845 19:06:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTY0ZTA3MDFjZDUzYjY5ZmFiODI1YTNhZWI3NTM0M2EbXNsE: 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: ]] 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEzNDdkMzlmNWJmNDg2ZDAzZGU5MWFhYWI0YzA5N2GUy8qI: 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.845 19:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 nvme0n1 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNmYTY5ZjNiMjQ1NGJmYjlhMjM0ZTQ1ZmE4ZDA4OWY4MzYxYTE5ZGZjM2RlOWYz11tHvg==: 00:33:07.784 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: ]] 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWJhNDhhZjFkNjE2OTdiYzc5MTg1ZDJmNzhkNWQzMTPeU7Zj: 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:07.785 19:06:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.785 19:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.718 nvme0n1 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIwMmVkYTI5NTlhNGM0OWE1Y2RkYTJiZDcxODE1ZDYyMmQzOGEzY2RkMDllNmIwNDNiMTNmZTI5NjRhNGMyMyPl8I8=: 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:08.718 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:08.719 19:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.694 nvme0n1 00:33:09.694 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.694 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.694 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.694 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.694 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.694 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.953 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.953 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.953 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.953 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.953 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJmNGQ0YWU4N2I4NDZjN2QzZTVjZDc2ZTZmNzZmN2YxYjIwYjEzMmE1MTYyMzU1QsyHOg==: 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmZmZDAxZjFkYmI4NDNlZWM2YjFhYjNhMGEwZmE4ZTg3MTI0OWFmN2ViNDdhZDgyNWjwAg==: 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.954 
19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.954 request: 00:33:09.954 { 00:33:09.954 "name": "nvme0", 00:33:09.954 "trtype": "tcp", 00:33:09.954 "traddr": "10.0.0.1", 00:33:09.954 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.954 "adrfam": "ipv4", 00:33:09.954 "trsvcid": "4420", 00:33:09.954 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.954 "method": "bdev_nvme_attach_controller", 00:33:09.954 "req_id": 1 00:33:09.954 } 00:33:09.954 Got JSON-RPC error response 00:33:09.954 response: 00:33:09.954 { 00:33:09.954 "code": -5, 00:33:09.954 "message": "Input/output error" 00:33:09.954 } 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:09.954 
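Note: the repeated get_main_ns_ip traces above resolve which address the initiator should dial for the current transport. A reconstruction of that helper as a sketch only (names taken from the nvmf/common.sh lines in the trace; the exact control flow is partly assumed):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # TEST_TRANSPORT is tcp in this run, so NVMF_INITIATOR_IP is chosen.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable holding the address
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"                          # 10.0.0.1 in this run
    }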
19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.954 request: 00:33:09.954 { 00:33:09.954 "name": "nvme0", 00:33:09.954 "trtype": "tcp", 00:33:09.954 "traddr": "10.0.0.1", 00:33:09.954 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.954 "adrfam": "ipv4", 00:33:09.954 "trsvcid": "4420", 00:33:09.954 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.954 "dhchap_key": "key2", 00:33:09.954 "method": "bdev_nvme_attach_controller", 00:33:09.954 "req_id": 1 00:33:09.954 } 00:33:09.954 Got JSON-RPC error response 00:33:09.954 response: 00:33:09.954 { 00:33:09.954 "code": -5, 00:33:09.954 "message": "Input/output error" 00:33:09.954 } 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:09.954 
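Note: the NOT-wrapped attach attempts above exercise the DH-HMAC-CHAP failure path: the kernel target was just rekeyed to a sha256/ffdhe2048 credential (keyid 1), so connecting with no key, or with a key that does not pair with it, is expected to fail with -5 (Input/output error). A stand-alone sketch of the same RPC sequence, assuming a running SPDK application with its default /var/tmp/spdk.sock RPC socket and key names (key1, key2) registered earlier in the test:

    rpc=./scripts/rpc.py   # path assumed; the harness calls this through rpc_cmd

    # Limit the host to the digest and DH group used by this run.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach with no DH-HMAC-CHAP key: the target now requires one, so this is
    # expected to fail, as in the trace above.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 || echo "expected failure"

    # Attach with a key that does not match the one loaded on the target.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 || echo "expected failure"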
19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.954 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.213 request: 00:33:10.213 { 00:33:10.213 "name": "nvme0", 00:33:10.213 "trtype": "tcp", 00:33:10.213 "traddr": "10.0.0.1", 00:33:10.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:10.213 "adrfam": "ipv4", 00:33:10.213 "trsvcid": "4420", 00:33:10.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:10.213 "dhchap_key": "key1", 00:33:10.213 "dhchap_ctrlr_key": "ckey2", 00:33:10.213 "method": "bdev_nvme_attach_controller", 00:33:10.213 "req_id": 1 
00:33:10.213 } 00:33:10.213 Got JSON-RPC error response 00:33:10.213 response: 00:33:10.213 { 00:33:10.213 "code": -5, 00:33:10.213 "message": "Input/output error" 00:33:10.213 } 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:10.213 rmmod nvme_tcp 00:33:10.213 rmmod nvme_fabrics 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3672302 ']' 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3672302 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3672302 ']' 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3672302 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3672302 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3672302' 00:33:10.213 killing process with pid 3672302 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3672302 00:33:10.213 19:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3672302 00:33:10.471 19:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:10.471 19:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:10.471 19:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:10.471 19:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:10.471 19:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:10.471 19:06:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.471 19:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:10.471 19:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.376 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:12.376 19:06:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:12.376 19:06:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:12.376 19:06:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:12.376 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:12.376 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:12.635 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:12.635 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:12.635 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:12.635 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:12.635 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:12.635 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:12.635 19:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:13.570 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:13.570 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:13.570 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:13.570 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:13.570 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:13.570 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:13.570 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:13.570 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:13.830 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:14.769 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:14.769 19:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3V1 /tmp/spdk.key-null.ANt /tmp/spdk.key-sha256.EBl /tmp/spdk.key-sha384.4mW /tmp/spdk.key-sha512.6kh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:14.769 19:06:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:15.703 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:15.703 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:15.703 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:15.703 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:15.703 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:15.703 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:15.703 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:15.703 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:15.703 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:15.703 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:15.703 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:15.703 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:15.703 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:15.962 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:15.962 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:15.962 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:15.962 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:15.962 00:33:15.962 real 0m49.963s 00:33:15.962 user 0m47.871s 00:33:15.962 sys 0m5.629s 00:33:15.962 19:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:15.962 19:06:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.962 ************************************ 00:33:15.962 END TEST nvmf_auth_host 00:33:15.962 ************************************ 00:33:15.962 19:06:27 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:15.962 19:06:27 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:15.962 19:06:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:15.962 19:06:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:15.962 19:06:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:15.962 ************************************ 00:33:15.962 START TEST nvmf_digest 00:33:15.962 ************************************ 00:33:15.962 19:06:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:16.220 * Looking for test storage... 
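Note: the cleanup traced just before the digest suite starts tears down the configfs-based kernel target that served as the authentication peer. Condensed into one sketch, using the same NQNs as the run; the redirect target of the traced 'echo 0' is not visible in the log, so the namespace enable attribute is an assumption:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"     # drop the host from the allow list
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/namespaces/1/enable"                   # assumed target of the traced 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules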
00:33:16.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.220 19:06:27 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:16.221 19:06:27 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:16.221 19:06:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:18.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:18.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.122 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:18.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:18.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.123 19:06:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:18.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:33:18.382 00:33:18.382 --- 10.0.0.2 ping statistics --- 00:33:18.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.382 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:33:18.382 00:33:18.382 --- 10.0.0.1 ping statistics --- 00:33:18.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.382 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:18.382 ************************************ 00:33:18.382 START TEST nvmf_digest_clean 00:33:18.382 ************************************ 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3682407 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3682407 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3682407 ']' 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.382 
19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:18.382 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.382 [2024-07-25 19:06:30.214235] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:18.382 [2024-07-25 19:06:30.214329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.382 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.642 [2024-07-25 19:06:30.285248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.642 [2024-07-25 19:06:30.376755] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.642 [2024-07-25 19:06:30.376819] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.642 [2024-07-25 19:06:30.376846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.642 [2024-07-25 19:06:30.376861] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.642 [2024-07-25 19:06:30.376873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:18.642 [2024-07-25 19:06:30.376905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.642 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.900 null0 00:33:18.901 [2024-07-25 19:06:30.560741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.901 [2024-07-25 19:06:30.584992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3682426 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3682426 /var/tmp/bperf.sock 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3682426 ']' 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:18.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:18.901 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.901 [2024-07-25 19:06:30.634288] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:18.901 [2024-07-25 19:06:30.634367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682426 ] 00:33:18.901 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.901 [2024-07-25 19:06:30.699319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.157 [2024-07-25 19:06:30.793991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.157 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:19.157 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:19.157 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:19.157 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:19.157 19:06:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:19.415 19:06:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.415 19:06:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.673 nvme0n1 00:33:19.673 19:06:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:19.673 19:06:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:19.931 Running I/O for 2 seconds... 
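Note: each run_bperf iteration above follows the same pattern: launch bdevperf against the target with --wait-for-rpc on a private socket, finish framework initialization over RPC, attach the controller with data digests enabled (--ddgst), then drive the timed workload through bdevperf.py. A compressed sketch of one iteration with the 4096-byte, queue-depth-128 randread parameters from this run (SPDK_DIR and the readiness wait are assumptions; the harness uses waitforlisten instead):

    SPDK_DIR=./spdk            # assumed checkout location
    SOCK=/var/tmp/bperf.sock

    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Wait for the RPC socket to appear before configuring the app.
    while [[ ! -S $SOCK ]]; do sleep 0.1; done

    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" framework_start_init
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the 2-second workload defined on the bdevperf command line.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests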
00:33:21.839 00:33:21.839 Latency(us) 00:33:21.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.839 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:21.839 nvme0n1 : 2.05 18049.19 70.50 0.00 0.00 6947.81 3640.89 46409.20 00:33:21.839 =================================================================================================================== 00:33:21.839 Total : 18049.19 70.50 0.00 0.00 6947.81 3640.89 46409.20 00:33:21.839 0 00:33:21.839 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:21.839 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:21.839 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:21.839 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:21.839 | select(.opcode=="crc32c") 00:33:21.839 | "\(.module_name) \(.executed)"' 00:33:21.839 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3682426 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3682426 ']' 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3682426 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3682426 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3682426' 00:33:22.099 killing process with pid 3682426 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3682426 00:33:22.099 Received shutdown signal, test time was about 2.000000 seconds 00:33:22.099 00:33:22.099 Latency(us) 00:33:22.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.099 =================================================================================================================== 00:33:22.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.099 19:06:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3682426 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:22.357 19:06:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3682833 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3682833 /var/tmp/bperf.sock 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3682833 ']' 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:22.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:22.357 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.357 [2024-07-25 19:06:34.197255] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:22.357 [2024-07-25 19:06:34.197332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682833 ] 00:33:22.357 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:22.357 Zero copy mechanism will not be used. 
00:33:22.357 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.615 [2024-07-25 19:06:34.261429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.615 [2024-07-25 19:06:34.351576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.615 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:22.615 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:22.615 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:22.615 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:22.615 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.872 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.872 19:06:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.439 nvme0n1 00:33:23.439 19:06:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:23.439 19:06:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.697 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.697 Zero copy mechanism will not be used. 00:33:23.697 Running I/O for 2 seconds... 
00:33:25.602 00:33:25.602 Latency(us) 00:33:25.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.602 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:25.602 nvme0n1 : 2.00 5037.29 629.66 0.00 0.00 3171.91 770.65 12136.30 00:33:25.602 =================================================================================================================== 00:33:25.602 Total : 5037.29 629.66 0.00 0.00 3171.91 770.65 12136.30 00:33:25.602 0 00:33:25.602 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:25.602 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:25.602 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:25.602 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:25.602 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:25.602 | select(.opcode=="crc32c") 00:33:25.602 | "\(.module_name) \(.executed)"' 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3682833 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3682833 ']' 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3682833 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3682833 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3682833' 00:33:25.860 killing process with pid 3682833 00:33:25.860 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3682833 00:33:25.860 Received shutdown signal, test time was about 2.000000 seconds 00:33:25.861 00:33:25.861 Latency(us) 00:33:25.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.861 =================================================================================================================== 00:33:25.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:25.861 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3682833 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:26.119 19:06:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3683310 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3683310 /var/tmp/bperf.sock 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3683310 ']' 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.119 19:06:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:26.119 [2024-07-25 19:06:37.914675] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
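run_bperf above starts bdevperf in the background and only proceeds once its RPC socket answers. A sketch of that launch-and-wait pattern; the real waitforlisten helper in autotest_common.sh does more, and the polling loop below is only an approximation of it:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    # launch bdevperf on core mask 0x2 with the randwrite/4096/qd128 workload, paused until RPC
    $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    # wait until the application responds on its UNIX domain socket
    until $SPDK/scripts/rpc.py -t 1 -s $SOCK rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done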
00:33:26.119 [2024-07-25 19:06:37.914787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683310 ] 00:33:26.119 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.119 [2024-07-25 19:06:37.976524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.376 [2024-07-25 19:06:38.063205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.376 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:26.376 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:26.376 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:26.377 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:26.377 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:26.634 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.634 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.892 nvme0n1 00:33:26.892 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:26.892 19:06:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:27.150 Running I/O for 2 seconds... 
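As after the first run above, once the two-second run finishes the test reads the crc32c counters back from the accel framework and checks that the digest work was really executed, and executed in software (scan_dsa is false here). A sketch of that check, using the same jq filter that appears in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))          # the crc32c digest path must have run at least once
    [[ $acc_module == software ]]   # and in the software module, since DSA scanning is off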
00:33:29.055 00:33:29.055 Latency(us) 00:33:29.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.055 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:29.055 nvme0n1 : 2.00 21797.31 85.15 0.00 0.00 5865.64 3094.76 13495.56 00:33:29.055 =================================================================================================================== 00:33:29.055 Total : 21797.31 85.15 0.00 0.00 5865.64 3094.76 13495.56 00:33:29.055 0 00:33:29.055 19:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:29.055 19:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:29.055 19:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:29.055 19:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:29.055 | select(.opcode=="crc32c") 00:33:29.055 | "\(.module_name) \(.executed)"' 00:33:29.055 19:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3683310 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3683310 ']' 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3683310 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3683310 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3683310' 00:33:29.314 killing process with pid 3683310 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3683310 00:33:29.314 Received shutdown signal, test time was about 2.000000 seconds 00:33:29.314 00:33:29.314 Latency(us) 00:33:29.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.314 =================================================================================================================== 00:33:29.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.314 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3683310 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:29.573 19:06:41 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3683768 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3683768 /var/tmp/bperf.sock 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3683768 ']' 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:29.573 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:29.573 [2024-07-25 19:06:41.442389] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:29.573 [2024-07-25 19:06:41.442465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683768 ] 00:33:29.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:29.573 Zero copy mechanism will not be used. 
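The killprocess/wait pair traced after each run (most recently for pid 3683310 above) is what tears the bdevperf instance down before the next one starts. A condensed sketch of that helper's shape; the real version in autotest_common.sh carries additional guards:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                           # the process must still be running
        local name
        name=$(ps --no-headers -o comm= "$pid")  # never kill a bare sudo wrapper
        [ "$name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
    }
    killprocess "$bperfpid"
    wait "$bperfpid"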
00:33:29.830 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.830 [2024-07-25 19:06:41.504559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.830 [2024-07-25 19:06:41.592579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.830 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:29.830 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:29.830 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:29.830 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:29.830 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:30.396 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.396 19:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.654 nvme0n1 00:33:30.654 19:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:30.654 19:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:30.654 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:30.654 Zero copy mechanism will not be used. 00:33:30.654 Running I/O for 2 seconds... 
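The bare nvme0n1 in the trace is the namespace bdev name returned by bdev_nvme_attach_controller; it is the device perform_tests runs against. Not part of the test itself, but it can be inspected over the same socket, for example:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_bdevs -b nvme0n1   # assumes the attach above succeeded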
00:33:33.193 00:33:33.193 Latency(us) 00:33:33.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.193 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:33.193 nvme0n1 : 2.00 4941.11 617.64 0.00 0.00 3229.56 2305.90 8786.68 00:33:33.193 =================================================================================================================== 00:33:33.193 Total : 4941.11 617.64 0.00 0.00 3229.56 2305.90 8786.68 00:33:33.193 0 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:33.193 | select(.opcode=="crc32c") 00:33:33.193 | "\(.module_name) \(.executed)"' 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3683768 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3683768 ']' 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3683768 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:33.193 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:33.194 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3683768 00:33:33.194 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:33.194 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:33.194 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3683768' 00:33:33.194 killing process with pid 3683768 00:33:33.194 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3683768 00:33:33.194 Received shutdown signal, test time was about 2.000000 seconds 00:33:33.194 00:33:33.194 Latency(us) 00:33:33.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.194 =================================================================================================================== 00:33:33.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:33.194 19:06:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3683768 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3682407 00:33:33.194 19:06:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3682407 ']' 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3682407 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3682407 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3682407' 00:33:33.194 killing process with pid 3682407 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3682407 00:33:33.194 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3682407 00:33:33.452 00:33:33.452 real 0m15.125s 00:33:33.452 user 0m29.826s 00:33:33.452 sys 0m4.186s 00:33:33.452 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:33.452 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.452 ************************************ 00:33:33.452 END TEST nvmf_digest_clean 00:33:33.452 ************************************ 00:33:33.452 19:06:45 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:33.452 19:06:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:33.452 19:06:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:33.452 19:06:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:33.712 ************************************ 00:33:33.712 START TEST nvmf_digest_error 00:33:33.712 ************************************ 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3684203 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3684203 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3684203 ']' 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:33.712 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.712 [2024-07-25 19:06:45.391098] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:33.712 [2024-07-25 19:06:45.391208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.712 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.712 [2024-07-25 19:06:45.461640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.712 [2024-07-25 19:06:45.549961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.712 [2024-07-25 19:06:45.550024] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.712 [2024-07-25 19:06:45.550051] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.712 [2024-07-25 19:06:45.550074] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.712 [2024-07-25 19:06:45.550088] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
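For the error-path test that starts below, the target is brought up paused, presumably so that crc32c can be rerouted to the error accel module before the accel framework finishes initializing. A sketch of that startup under those assumptions (the framework_start_init step is presumably folded into the common_target_config batch and is not visible verbatim in the trace; waitforlisten on /var/tmp/spdk.sock is omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start nvmf_tgt paused (--wait-for-rpc) inside the test network namespace
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # route crc32c through the "error" accel module so digests can later be corrupted
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
    # then finish initialization and build the usual target config (null0 bdev, TCP listener)
    $SPDK/scripts/rpc.py framework_start_init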
00:33:33.712 [2024-07-25 19:06:45.550119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.971 [2024-07-25 19:06:45.618709] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.971 null0 00:33:33.971 [2024-07-25 19:06:45.733227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.971 [2024-07-25 19:06:45.757475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3684228 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3684228 /var/tmp/bperf.sock 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3684228 ']' 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:33.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:33.971 19:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.971 [2024-07-25 19:06:45.803872] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:33.971 [2024-07-25 19:06:45.803936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684228 ] 00:33:33.971 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.230 [2024-07-25 19:06:45.865405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.230 [2024-07-25 19:06:45.955997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.230 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:34.230 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:34.230 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:34.230 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:34.488 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:34.488 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.488 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.746 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.746 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:34.746 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.005 nvme0n1 00:33:35.005 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:35.005 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.005 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.005 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.005 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:35.005 19:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:35.005 Running I/O for 2 seconds... 00:33:35.005 [2024-07-25 19:06:46.845841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.005 [2024-07-25 19:06:46.845892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.005 [2024-07-25 19:06:46.845924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.005 [2024-07-25 19:06:46.860882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.005 [2024-07-25 19:06:46.860919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.005 [2024-07-25 19:06:46.860953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.005 [2024-07-25 19:06:46.874760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.005 [2024-07-25 19:06:46.874796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.005 [2024-07-25 19:06:46.874826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.890438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.890476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.890500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.903853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.903887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.903907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.918213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.918245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.918263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.929817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.929852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:663 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.929872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.944309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.944341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.944375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.957453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.957488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.957511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.972372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.972408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.972428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.987475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.987520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.987539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.263 [2024-07-25 19:06:46.999551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.263 [2024-07-25 19:06:46.999586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.263 [2024-07-25 19:06:46.999605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.016427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.016471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.016491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.032197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.032233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:3415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.032251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.044656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.044690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.044709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.059845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.059880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.059906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.073846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.073886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.073910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.089803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.089840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.089878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.101247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.101280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.101303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.115936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.115970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.115990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.264 [2024-07-25 19:06:47.129149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.264 [2024-07-25 19:06:47.129193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.264 [2024-07-25 19:06:47.129214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.143435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.143479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.143498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.157342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.157392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.157417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.169183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.169212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.169228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.183146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.183177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.183195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.195798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.195833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.195852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.209741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.209776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.209796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.223812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.223856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.223876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.237309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.237340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.237358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.249382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.249417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.249437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.265865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.265900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.265919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.539 [2024-07-25 19:06:47.279828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.539 [2024-07-25 19:06:47.279862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.539 [2024-07-25 19:06:47.279882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.291205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.291233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.291250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.304884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.304919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.304939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.318731] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.318765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.318785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.332542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.332577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.332598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.346057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.346101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.346135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.359602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.359636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.359655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.372472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.372507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.372527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.386019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.386053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.386082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.540 [2024-07-25 19:06:47.400165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.400197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.400214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:35.540 [2024-07-25 19:06:47.411895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.540 [2024-07-25 19:06:47.411930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.540 [2024-07-25 19:06:47.411949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.801 [2024-07-25 19:06:47.425928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.801 [2024-07-25 19:06:47.425969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.801 [2024-07-25 19:06:47.425990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.801 [2024-07-25 19:06:47.438901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.801 [2024-07-25 19:06:47.438935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.801 [2024-07-25 19:06:47.438954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.801 [2024-07-25 19:06:47.454188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.801 [2024-07-25 19:06:47.454225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.801 [2024-07-25 19:06:47.454255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.801 [2024-07-25 19:06:47.469906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.801 [2024-07-25 19:06:47.469941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.801 [2024-07-25 19:06:47.469961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.801 [2024-07-25 19:06:47.481801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.801 [2024-07-25 19:06:47.481851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.481869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.497300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.497333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.497350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.509339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.509382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.509397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.524513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.524548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.524567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.541080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.541124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.541141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.555199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.555230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.555258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.567848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.567883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.567903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.582434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.582477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.582497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.600829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.600864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.600884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.612423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.612458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.612477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.629206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.629235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.629252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.646844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.646886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.646907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.658631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.658666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.658685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.802 [2024-07-25 19:06:47.673868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:35.802 [2024-07-25 19:06:47.673903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.802 [2024-07-25 19:06:47.673923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.062 [2024-07-25 19:06:47.689956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.062 [2024-07-25 19:06:47.690000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.062 [2024-07-25 19:06:47.690022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.062 [2024-07-25 19:06:47.701650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.062 [2024-07-25 19:06:47.701684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
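The stream of data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions above is the intended outcome of this test: the error module on the target corrupts the crc32c results it produces, so the data digests the host verifies on receive no longer match, each read completes with a transient transport error (dnr:0, so retryable), and bdev_nvme keeps retrying because the retry count was set to -1. A sketch of the host/target setup that produces it, in the order the trace shows (flags copied from the trace; the -i 256 semantics belong to the error module):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    # host: keep per-error statistics and retry failed I/O indefinitely
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target (default /var/tmp/spdk.sock): injection off while the connection is set up
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # host: attach with data digest enabled
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target: now corrupt the computed crc32c values (flags as captured above)
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # host: run the workload; reads complete with status 00/22 and are retried
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests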
00:33:36.062 [2024-07-25 19:06:47.701712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.062 [2024-07-25 19:06:47.717465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.062 [2024-07-25 19:06:47.717503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.062 [2024-07-25 19:06:47.717523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.062 [2024-07-25 19:06:47.730527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.062 [2024-07-25 19:06:47.730563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.062 [2024-07-25 19:06:47.730582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.062 [2024-07-25 19:06:47.747395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.062 [2024-07-25 19:06:47.747431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.062 [2024-07-25 19:06:47.747450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.062 [2024-07-25 19:06:47.763152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.062 [2024-07-25 19:06:47.763180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.062 [2024-07-25 19:06:47.763197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.775015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.775049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.775079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.791992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.792028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.792047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.803854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.803889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:21592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.803908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.819385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.819420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.819440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.834904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.834947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.834968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.847445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.847479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.847499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.861700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.861735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.861754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.874306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.874335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.874351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.889722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.889759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.889779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.905217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.905249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.905267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.917525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.917560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.917580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.063 [2024-07-25 19:06:47.935043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.063 [2024-07-25 19:06:47.935099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.063 [2024-07-25 19:06:47.935143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:47.950608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:47.950644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:47.950663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:47.963266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:47.963297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:47.963315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:47.978521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:47.978556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:47.978576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:47.991340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:47.991385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:47.991405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.005932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 
00:33:36.322 [2024-07-25 19:06:48.005967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.005988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.019806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.019842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.019862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.034131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.034163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.034181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.049220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.049248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.049265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.060861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.060895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.060915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.077487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.077523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.077557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.089054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.089113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.089130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.105843] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.105879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.322 [2024-07-25 19:06:48.105898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.322 [2024-07-25 19:06:48.123457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.322 [2024-07-25 19:06:48.123493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.323 [2024-07-25 19:06:48.123512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.323 [2024-07-25 19:06:48.139246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.323 [2024-07-25 19:06:48.139279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.323 [2024-07-25 19:06:48.139297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.323 [2024-07-25 19:06:48.151556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.323 [2024-07-25 19:06:48.151592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.323 [2024-07-25 19:06:48.151612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.323 [2024-07-25 19:06:48.167455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.323 [2024-07-25 19:06:48.167490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.323 [2024-07-25 19:06:48.167510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.323 [2024-07-25 19:06:48.183747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.323 [2024-07-25 19:06:48.183782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.323 [2024-07-25 19:06:48.183802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.202790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.202827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.202846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.214020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.214070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.214107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.230512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.230547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.230567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.246442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.246477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.246497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.258914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.258949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.258968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.275596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.275632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.275651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.290799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.290835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.290854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.304731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.304766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.304786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.318454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.318489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.318509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.330769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.330804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.330829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.347536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.347571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.347589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.361330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.361375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.361395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.374606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.374641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.374660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.390132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.390162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.390179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.405174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.405205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.405223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.421233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.421265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.421282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.433226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.433257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.433274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.583 [2024-07-25 19:06:48.449641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.583 [2024-07-25 19:06:48.449674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.583 [2024-07-25 19:06:48.449698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.843 [2024-07-25 19:06:48.460949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.843 [2024-07-25 19:06:48.460986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.843 [2024-07-25 19:06:48.461005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.843 [2024-07-25 19:06:48.476179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.843 [2024-07-25 19:06:48.476209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.843 [2024-07-25 19:06:48.476225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.843 [2024-07-25 19:06:48.492015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.843 [2024-07-25 19:06:48.492046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.843 [2024-07-25 19:06:48.492072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.843 [2024-07-25 19:06:48.502750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.843 [2024-07-25 19:06:48.502778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:36.843 [2024-07-25 19:06:48.502794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.843 [2024-07-25 19:06:48.517332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.843 [2024-07-25 19:06:48.517364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.843 [2024-07-25 19:06:48.517382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.843 [2024-07-25 19:06:48.531967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.843 [2024-07-25 19:06:48.531997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.843 [2024-07-25 19:06:48.532014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.843 [2024-07-25 19:06:48.543249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.843 [2024-07-25 19:06:48.543279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.543296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.559973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.560003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.560020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.570857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.570889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.570922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.584696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.584726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.584742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.597819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.597864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:3965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.597880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.611346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.611383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.611401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.622405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.622436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.622463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.638669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.638697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.638713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.649196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.649227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.649244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.663917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.663950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.663968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.676945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.676976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.676994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.688240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.688270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.688297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.703458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.703489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.703506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.844 [2024-07-25 19:06:48.715725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:36.844 [2024-07-25 19:06:48.715757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.844 [2024-07-25 19:06:48.715795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.102 [2024-07-25 19:06:48.726700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:37.102 [2024-07-25 19:06:48.726729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.102 [2024-07-25 19:06:48.726745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.102 [2024-07-25 19:06:48.743166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:37.102 [2024-07-25 19:06:48.743197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.103 [2024-07-25 19:06:48.743215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.103 [2024-07-25 19:06:48.756255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:37.103 [2024-07-25 19:06:48.756286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.103 [2024-07-25 19:06:48.756304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.103 [2024-07-25 19:06:48.767423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:37.103 [2024-07-25 19:06:48.767455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.103 [2024-07-25 19:06:48.767473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.103 [2024-07-25 19:06:48.780215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 
00:33:37.103 [2024-07-25 19:06:48.780246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.103 [2024-07-25 19:06:48.780263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.103 [2024-07-25 19:06:48.790615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:37.103 [2024-07-25 19:06:48.790643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.103 [2024-07-25 19:06:48.790659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.103 [2024-07-25 19:06:48.803462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:37.103 [2024-07-25 19:06:48.803499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.103 [2024-07-25 19:06:48.803516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.103 [2024-07-25 19:06:48.819794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a88d0) 00:33:37.103 [2024-07-25 19:06:48.819823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.103 [2024-07-25 19:06:48.819839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.103 00:33:37.103 Latency(us) 00:33:37.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.103 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:37.103 nvme0n1 : 2.00 18058.70 70.54 0.00 0.00 7080.38 2912.71 24078.41 00:33:37.103 =================================================================================================================== 00:33:37.103 Total : 18058.70 70.54 0.00 0.00 7080.38 2912.71 24078.41 00:33:37.103 0 00:33:37.103 19:06:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:37.103 19:06:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:37.103 19:06:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:37.103 19:06:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:37.103 | .driver_specific 00:33:37.103 | .nvme_error 00:33:37.103 | .status_code 00:33:37.103 | .command_transient_transport_error' 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3684228 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3684228 ']' 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3684228 00:33:37.361 
19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3684228 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3684228' 00:33:37.361 killing process with pid 3684228 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3684228 00:33:37.361 Received shutdown signal, test time was about 2.000000 seconds 00:33:37.361 00:33:37.361 Latency(us) 00:33:37.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.361 =================================================================================================================== 00:33:37.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:37.361 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3684228 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3684728 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3684728 /var/tmp/bperf.sock 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3684728 ']' 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:37.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:37.620 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.620 [2024-07-25 19:06:49.381723] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:37.620 [2024-07-25 19:06:49.381807] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684728 ] 00:33:37.620 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.620 Zero copy mechanism will not be used. 00:33:37.620 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.620 [2024-07-25 19:06:49.448789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.877 [2024-07-25 19:06:49.541811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.877 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.877 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:37.877 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:37.877 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.134 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:38.134 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.134 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.134 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.134 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.134 19:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.732 nvme0n1 00:33:38.732 19:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:38.732 19:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.732 19:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.732 19:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.732 19:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:38.732 19:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.732 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.732 Zero copy mechanism will not be used. 00:33:38.732 Running I/O for 2 seconds... 
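For readers following the trace, the error-injection pass above reduces to a short sequence of SPDK commands. The sketch below is reconstructed from the log lines themselves (the bdevperf flags, the /var/tmp/bperf.sock RPC socket, the 10.0.0.2:4420 subsystem, and the jq filter all appear verbatim above) and is illustrative only; in the original host/digest.sh the two accel_error_inject_error calls go through rpc_cmd, so which application they reach depends on the harness's default RPC socket rather than on bperf.sock.

  # Minimal sketch of the pass traced above; commands, paths, and addresses are taken from this log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # bdevperf in RPC (-z) mode with the same workload: 128 KiB random reads, queue depth 16, 2 s, core mask 0x2.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &

  # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target with data digest (--ddgst) enabled, then have the accel framework corrupt
  # every 32nd crc32c result so receives fail the digest check and are completed as
  # TRANSIENT TRANSPORT ERROR, as in the entries above. (Socket for these two calls: see note above.)
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive I/O, then read back how many completions were counted as transient transport errors
  # (the same jq filter the test's get_transient_errcount helper uses).
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
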
00:33:38.732 [2024-07-25 19:06:50.522196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.522250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.522270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.528631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.528668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.528688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.534653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.534690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.534710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.540991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.541028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.541048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.547028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.547073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.547111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.553108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.553141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.553159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.559200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.559232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.559250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.565186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.565218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.565241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.571383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.571420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.571440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.577534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.577570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.577590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.583610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.583646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.583665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.732 [2024-07-25 19:06:50.589732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.732 [2024-07-25 19:06:50.589768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.732 [2024-07-25 19:06:50.589788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.595757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.595793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.595812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.601790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.601825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.601845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.607877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.607912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.607932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.614010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.614046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.614079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.620307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.620346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.620380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.626558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.626593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.626613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.632632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.632668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.632687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.638657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.638691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.638711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.644797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.644833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:38.996 [2024-07-25 19:06:50.644854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.651075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.651123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.651141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.657120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.657154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.657172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.662978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.663010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.663028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.669087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.669136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.669154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.675199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.675230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.675247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.681364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.681412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.681431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.687451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.687486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.687505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.693426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.693461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.693481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.699543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.699578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.699598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.705584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.705619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.705639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.711658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.711694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.996 [2024-07-25 19:06:50.711713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.996 [2024-07-25 19:06:50.717646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.996 [2024-07-25 19:06:50.717681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.717700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.723625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.723664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.723685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.729624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.729659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.729678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.735862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.735898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.735917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.741857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.741891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.741911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.747850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.747885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.747905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.753828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.753863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.753882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.759761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.759797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.759817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.765731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.765767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.765786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.771720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 
00:33:38.997 [2024-07-25 19:06:50.771755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.771774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.777630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.777663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.777683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.784429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.784479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.784499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.790481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.790516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.790536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.796610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.796646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.796666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.802563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.802598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.802618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.808542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.808577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.808596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.814636] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.814671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.814691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.820619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.820654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.820673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.826592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.826628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.826653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.832706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.832742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.832762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.838704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.838739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.838759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.844755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.844790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.844810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.850714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.850749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.850769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.856666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.856701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.856721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.862651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.862687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.862706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.997 [2024-07-25 19:06:50.868659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:38.997 [2024-07-25 19:06:50.868694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.997 [2024-07-25 19:06:50.868713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.255 [2024-07-25 19:06:50.874653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.255 [2024-07-25 19:06:50.874689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.255 [2024-07-25 19:06:50.874708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.255 [2024-07-25 19:06:50.880715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.255 [2024-07-25 19:06:50.880760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.255 [2024-07-25 19:06:50.880780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.255 [2024-07-25 19:06:50.886694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.255 [2024-07-25 19:06:50.886729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.255 [2024-07-25 19:06:50.886748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.255 [2024-07-25 19:06:50.892711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.255 [2024-07-25 19:06:50.892746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.255 [2024-07-25 19:06:50.892766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.255 [2024-07-25 19:06:50.898661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.255 [2024-07-25 19:06:50.898695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.255 [2024-07-25 19:06:50.898714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.255 [2024-07-25 19:06:50.904683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.255 [2024-07-25 19:06:50.904719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.255 [2024-07-25 19:06:50.904739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.255 [2024-07-25 19:06:50.910674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.910709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.910729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.916872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.916907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.916927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.922896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.922930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.922949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.928846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.928881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.928900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.934758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.934792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.934812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.940779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.940814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.940833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.946739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.946773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.946792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.952740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.952773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.952793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.958783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.958818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.958837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.964952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.964986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.965006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.970894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.970929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.970948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.976917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.976950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:39.256 [2024-07-25 19:06:50.976969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.982884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.982924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.982944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.988869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.988903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.988921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:50.994913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:50.994947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:50.994965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.000902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.000937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.000956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.006892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.006926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.006945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.013034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.013074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.013096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.019025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.019066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.019087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.024983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.025018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.025037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.030922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.030956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.030975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.037756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.037791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.037811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.044337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.044384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.044401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.050272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.050304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.050322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.056010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.056046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.056073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.062180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.062211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.062229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.068386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.068421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.068441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.074333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.256 [2024-07-25 19:06:51.074380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.256 [2024-07-25 19:06:51.074399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.256 [2024-07-25 19:06:51.080461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.080495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.080516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.086612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.086648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.086672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.092663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.092698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.092717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.098804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.098838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.098857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.104990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 
00:33:39.257 [2024-07-25 19:06:51.105024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.105043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.111209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.111240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.111257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.117272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.117302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.117318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.123305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.123351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.123368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.257 [2024-07-25 19:06:51.129483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.257 [2024-07-25 19:06:51.129518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.257 [2024-07-25 19:06:51.129538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.135536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.135571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.135590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.141593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.141632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.141652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.147804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.147839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.147859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.153898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.153933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.153952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.160014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.160048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.160078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.166195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.166226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.172235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.172279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.172296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.178349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.178398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.178417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.184464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.184499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.184517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.190483] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.190516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.190535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.196548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.196582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.196601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.202831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.202866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.202885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.208850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.208885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.208904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.215168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.215199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.215217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.221349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.221380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.221397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.516 [2024-07-25 19:06:51.227390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.227425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.227444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:39.516 [2024-07-25 19:06:51.233343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.516 [2024-07-25 19:06:51.233390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.516 [2024-07-25 19:06:51.233411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.239339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.239385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.239404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.245277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.245307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.245329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.251364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.251410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.251430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.257317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.257362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.257382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.263461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.263495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.263514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.269619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.269653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.269673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.275917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.275951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.275971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.281956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.281990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.282009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.288733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.288769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.288789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.294746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.294781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.294800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.301077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.301124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.301141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.307195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.307239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.307256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.313374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.313409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.313428] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.319403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.319436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.319456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.325418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.325452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.325471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.331438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.331472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.331492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.337507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.337542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.337562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.343637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.343671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.343691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.349574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.349608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.349634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.355868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.355903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.355922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.361865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.361900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.361931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.368004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.368038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.368057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.374364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.374422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.374442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.380497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.380531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.380551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.386528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.386562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.517 [2024-07-25 19:06:51.386581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.517 [2024-07-25 19:06:51.392851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.517 [2024-07-25 19:06:51.392885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.392908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.398867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.398901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:39.776 [2024-07-25 19:06:51.398926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.405041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.405119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.405140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.411443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.411479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.411510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.417686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.417720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.417740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.423670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.423704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.423725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.429822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.429857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.429876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.436198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.436230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.436249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.442332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.442380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.442399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.448680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.448715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.448735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.454930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.454964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.454985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.461266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.461310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.461334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.467343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.467374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.467415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.473545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.473579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.473598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.479791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.776 [2024-07-25 19:06:51.479825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.776 [2024-07-25 19:06:51.479844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.776 [2024-07-25 19:06:51.485928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.485962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.485982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.491910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.491944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.491963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.498159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.498191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.498223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.504215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.504245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.504261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.510434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.510468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.510492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.516501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.516546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.516566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.522621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.522655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.522681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.528779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 
[2024-07-25 19:06:51.528814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.528835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.534893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.534927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.534946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.541929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.541964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.541984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.549360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.549415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.549441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.557973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.558009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.558031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.565281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.565313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.565331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.571279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.571315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.571334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.578695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.578731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.578750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.587049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.587117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.587135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.591976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.592018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.592038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.597935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.597971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.597991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.604937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.604972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.604991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.612748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.612784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.612804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.620390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.620426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.620446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.627852] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.627887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.627906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.633809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.633843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.633862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.640031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.640082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.640102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.646240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.646285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.646302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.777 [2024-07-25 19:06:51.652417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:39.777 [2024-07-25 19:06:51.652451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.777 [2024-07-25 19:06:51.652470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.658532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.658567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.658593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.664747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.664781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.664807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:33:40.038 [2024-07-25 19:06:51.670801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.670835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.670858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.677273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.677321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.677346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.683439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.683474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.683505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.689561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.689595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.689622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.695682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.695716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.695735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.701801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.701835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.701854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.708007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.708041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.708068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.714537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.714572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.714591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.720689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.720724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.720744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.726796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.726830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.726849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.732750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.732783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.732802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.738931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.738965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.738984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.745211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.745246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.745264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.751147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.751179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.751197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.757216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.757261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.757281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.763291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.763335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.763353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.769367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.769401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.769428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.775362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.775411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.038 [2024-07-25 19:06:51.775430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.038 [2024-07-25 19:06:51.781352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.038 [2024-07-25 19:06:51.781400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.781420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.787293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.787338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.787367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.794134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.794166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.794183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.800228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.800278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.806327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.806358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.806378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.812285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.812330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.812350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.818359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.818388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.818420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.824489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.824524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.824543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.830491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.830526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.830550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.834042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.834099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.834125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.839623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.839663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.839683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.845764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.845798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.845817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.851982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.852018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.852037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.857892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.857927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.857947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.863867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.863902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.863921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.869836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.869870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.869889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.875855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.875890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.875909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.881790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.881824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.881843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.888222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.888253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.888271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.894268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.894299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.894315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.900393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.900428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.900447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.906489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.906523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.906543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.039 [2024-07-25 19:06:51.912404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.039 [2024-07-25 19:06:51.912439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.039 [2024-07-25 19:06:51.912458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.918559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.918595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.918615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.924313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.924360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.924377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.930307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.930351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.930369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.936300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.936330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.936346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.942480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.942514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.942540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.948462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.948496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.948515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.954628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.954664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.954683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.960890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 
[2024-07-25 19:06:51.960926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.960945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.967399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.967434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.967453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.973413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.973447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.973466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.979458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.979493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.979512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.985480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.985515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.985534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.991434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.991469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.300 [2024-07-25 19:06:51.991488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.300 [2024-07-25 19:06:51.997447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.300 [2024-07-25 19:06:51.997487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:51.997507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.003331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.003380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.003400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.010708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.010743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.010763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.018858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.018893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.018914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.026782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.026818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.026837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.034937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.034972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.034992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.042731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.042768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.042788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.050916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.050952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.050972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.059230] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.059261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.059277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.067399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.067435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.067455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.075461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.075498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.075517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.083542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.083577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.083597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.091412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.091460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.091480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.099481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.099518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.099538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.107512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.107548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.107567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:33:40.301 [2024-07-25 19:06:52.115043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.115081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.115099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.122586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.122618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.122636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.130114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.130161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.130183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.135550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.135583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.135600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.141520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.141556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.141575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.147764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.147799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.147818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.153782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.153817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.153837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.159771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.159807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.159826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.165850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.165885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.165904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.301 [2024-07-25 19:06:52.172358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.301 [2024-07-25 19:06:52.172405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.301 [2024-07-25 19:06:52.172426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.178581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.178617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.178638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.184773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.184808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.184828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.191008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.191044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.191071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.197174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.197205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.197222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.204537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.204574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.204594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.212707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.212743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.212763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.220114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.220147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.220164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.226718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.226755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.226775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.233632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.233668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.233688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.238034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.238079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.238107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.245708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.245746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:40.562 [2024-07-25 19:06:52.245765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.254050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.254108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.254128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.260713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.260749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.260769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.267808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.267845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.267865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.274083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.274133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.274150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.281037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.281090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.281127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.287267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.287301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.287335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.294237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.294271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.294289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.298228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.298269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.298295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.305218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.305264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.305280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.312428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.312464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.562 [2024-07-25 19:06:52.312484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.562 [2024-07-25 19:06:52.319396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.562 [2024-07-25 19:06:52.319432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.319451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.326367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.326416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.326436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.333343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.333393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.333414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.340363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.340413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.340432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.347208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.347240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.347258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.353899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.353935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.353954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.360707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.360743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.360762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.367885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.367920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.367941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.374936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.374973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.374992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.381761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.381797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.381817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.387684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 
[2024-07-25 19:06:52.387719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.387739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.393813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.393849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.393868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.399896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.399933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.399952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.406051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.406094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.406130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.412303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.412350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.412377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.418523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.418559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.418579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.424779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.424813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.424832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.430711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.430745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.430765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.563 [2024-07-25 19:06:52.436745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.563 [2024-07-25 19:06:52.436779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.563 [2024-07-25 19:06:52.436798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.442654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.442689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.442708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.448715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.448750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.448770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.454823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.454858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.454878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.460915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.460951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.460970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.467026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.467074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.467096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.473087] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.473133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.473150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.479249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.479279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.479310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.485425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.485460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.485480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.491450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.491485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.491504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.497570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.497605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.497624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.503909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.503943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.503963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.822 [2024-07-25 19:06:52.509914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x118a2c0) 00:33:40.822 [2024-07-25 19:06:52.509948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.822 [2024-07-25 19:06:52.509967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
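Each event in the run above is the same three-record pattern: the host TCP transport reports a CRC-32C data digest mismatch on a received READ payload from nvme_tcp_accel_seq_recv_compute_crc32_done(), then prints the affected command and its completion, which carries status (00/22), COMMAND TRANSIENT TRANSPORT ERROR, instead of presenting the payload as valid data. The harness then verifies the behaviour by counting those completions over the bdevperf RPC socket; the trace below shows that check, and the one-liner here is only a condensed restatement of it, with the socket path, bdev name and jq filter copied verbatim from that trace:
# condensed sketch of the transient-error count query performed in the trace below
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
The randread leg passes when that counter is non-zero, which is the (( 317 > 0 )) check visible immediately below.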
00:33:40.822 00:33:40.822 Latency(us) 00:33:40.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.822 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:40.822 nvme0n1 : 2.00 4924.09 615.51 0.00 0.00 3244.84 807.06 11456.66 00:33:40.822 =================================================================================================================== 00:33:40.822 Total : 4924.09 615.51 0.00 0.00 3244.84 807.06 11456.66 00:33:40.822 0 00:33:40.822 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:40.822 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:40.822 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:40.822 | .driver_specific 00:33:40.822 | .nvme_error 00:33:40.822 | .status_code 00:33:40.822 | .command_transient_transport_error' 00:33:40.822 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 317 > 0 )) 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3684728 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3684728 ']' 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3684728 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3684728 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3684728' 00:33:41.081 killing process with pid 3684728 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3684728 00:33:41.081 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.081 00:33:41.081 Latency(us) 00:33:41.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.081 =================================================================================================================== 00:33:41.081 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.081 19:06:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3684728 00:33:41.339 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:41.339 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:41.339 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:41.339 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:41.339 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:41.339 
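The trace that follows repeats the same check for writes (run_bperf_err randwrite 4096 128). Condensed into plain commands, the setup it performs is: launch bdevperf on its own RPC socket, enable NVMe error statistics, attach an NVMe-oF TCP controller with data digest enabled, arm CRC-32C error injection in the accel layer, then drive I/O. Every path and argument in the sketch below is copied from that trace; the $SPDK shorthand and the comments are added here only for readability, and this is a restatement, not the harness script itself:
# condensed sketch of the randwrite leg set up in the trace below
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &   # backgrounded; the harness then waits for the socket
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable            # harness RPC helper (default socket, not bperf.sock); clears any previous injection
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256     # arm crc32c corruption, arguments exactly as in the trace
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Because writes carry host-to-controller data, the digest check for this leg runs on the target, which is why the errors below are reported from tcp.c:data_crc32_calc_done rather than from the host-side nvme_tcp.c seen in the randread run above.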
19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3685161 00:33:41.339 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:41.340 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3685161 /var/tmp/bperf.sock 00:33:41.340 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3685161 ']' 00:33:41.340 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:41.340 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:41.340 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:41.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:41.340 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:41.340 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:41.340 [2024-07-25 19:06:53.102336] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:41.340 [2024-07-25 19:06:53.102438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685161 ] 00:33:41.340 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.340 [2024-07-25 19:06:53.163885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.597 [2024-07-25 19:06:53.254863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.597 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:41.597 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:41.597 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:41.597 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:41.854 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:41.854 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.854 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:41.854 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.854 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.854 19:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.422 nvme0n1 00:33:42.422 19:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:42.422 19:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.422 19:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:42.422 19:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.422 19:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:42.422 19:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.422 Running I/O for 2 seconds... 00:33:42.683 [2024-07-25 19:06:54.305146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ee5c8 00:33:42.683 [2024-07-25 19:06:54.305990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.306034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.317946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fb480 00:33:42.683 [2024-07-25 19:06:54.319068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.319113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.329275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e2c28 00:33:42.683 [2024-07-25 19:06:54.330458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.330486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.341616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e0630 00:33:42.683 [2024-07-25 19:06:54.342941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.342984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.354086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e7c50 00:33:42.683 [2024-07-25 19:06:54.355538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.355572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.366500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b3910) with pdu=0x2000190feb58 00:33:42.683 [2024-07-25 19:06:54.367877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.367911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.379133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f0bc0 00:33:42.683 [2024-07-25 19:06:54.380753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.380781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.391738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e1f80 00:33:42.683 [2024-07-25 19:06:54.393667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.393695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.400371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190df550 00:33:42.683 [2024-07-25 19:06:54.401202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.401245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.412918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f1430 00:33:42.683 [2024-07-25 19:06:54.413874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.413902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.425673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc128 00:33:42.683 [2024-07-25 19:06:54.426758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.426791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.438164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fda78 00:33:42.683 [2024-07-25 19:06:54.439322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.439357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.450617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x16b3910) with pdu=0x2000190e5658 00:33:42.683 [2024-07-25 19:06:54.451953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.451994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.460832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f46d0 00:33:42.683 [2024-07-25 19:06:54.461438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.461482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.473399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e2c28 00:33:42.683 [2024-07-25 19:06:54.474196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.474230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.486089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f2510 00:33:42.683 [2024-07-25 19:06:54.487054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.487088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.497335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f5be8 00:33:42.683 [2024-07-25 19:06:54.498897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.498944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.507333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fe2e8 00:33:42.683 [2024-07-25 19:06:54.508154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.508181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.519981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e95a0 00:33:42.683 [2024-07-25 19:06:54.520969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.520997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.532553] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e2c28 00:33:42.683 [2024-07-25 19:06:54.533686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.533733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.545049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ee5c8 00:33:42.683 [2024-07-25 19:06:54.546307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.546336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:42.683 [2024-07-25 19:06:54.557647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e7c50 00:33:42.683 [2024-07-25 19:06:54.559255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.683 [2024-07-25 19:06:54.559286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.570794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e95a0 00:33:42.942 [2024-07-25 19:06:54.572460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.572503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.583712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f0bc0 00:33:42.942 [2024-07-25 19:06:54.585539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.585568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.595474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e1f80 00:33:42.942 [2024-07-25 19:06:54.596884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.596911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.606778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f9b30 00:33:42.942 [2024-07-25 19:06:54.608187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.608214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:42.942 
[2024-07-25 19:06:54.619575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ef270 00:33:42.942 [2024-07-25 19:06:54.621141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.621169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.632291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e5ec8 00:33:42.942 [2024-07-25 19:06:54.633993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.634026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.644843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e6fa8 00:33:42.942 [2024-07-25 19:06:54.646693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.646726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.657740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f0ff8 00:33:42.942 [2024-07-25 19:06:54.659800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.659829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.666435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f7da8 00:33:42.942 [2024-07-25 19:06:54.667287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.667315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.679253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e1b48 00:33:42.942 [2024-07-25 19:06:54.680279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.680323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:42.942 [2024-07-25 19:06:54.692078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ec408 00:33:42.942 [2024-07-25 19:06:54.693255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.942 [2024-07-25 19:06:54.693283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 
dnr:0 00:33:42.942 [2024-07-25 19:06:54.704881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190eb328 00:33:42.942 [2024-07-25 19:06:54.706240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.706283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.719211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f2948 00:33:42.943 [2024-07-25 19:06:54.721279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.721306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.727601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fe720 00:33:42.943 [2024-07-25 19:06:54.728537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.728564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.739240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f9b30 00:33:42.943 [2024-07-25 19:06:54.740159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.740201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.751636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f92c0 00:33:42.943 [2024-07-25 19:06:54.752734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.752761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.764373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f4298 00:33:42.943 [2024-07-25 19:06:54.765644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.765671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.777172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f81e0 00:33:42.943 [2024-07-25 19:06:54.778575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.778609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:003e p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.789753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f20d8 00:33:42.943 [2024-07-25 19:06:54.791314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.791342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.802270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f92c0 00:33:42.943 [2024-07-25 19:06:54.803860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.803887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:42.943 [2024-07-25 19:06:54.814497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f4b08 00:33:42.943 [2024-07-25 19:06:54.816455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.943 [2024-07-25 19:06:54.816483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:43.203 [2024-07-25 19:06:54.827490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e4140 00:33:43.203 [2024-07-25 19:06:54.829564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.203 [2024-07-25 19:06:54.829598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:43.203 [2024-07-25 19:06:54.835968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fe2e8 00:33:43.203 [2024-07-25 19:06:54.836858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.203 [2024-07-25 19:06:54.836890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:43.203 [2024-07-25 19:06:54.848269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e73e0 00:33:43.203 [2024-07-25 19:06:54.849201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.203 [2024-07-25 19:06:54.849233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:43.203 [2024-07-25 19:06:54.859844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e49b0 00:33:43.203 [2024-07-25 19:06:54.860784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.203 [2024-07-25 19:06:54.860817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:43.203 [2024-07-25 19:06:54.872973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f8e88 00:33:43.203 [2024-07-25 19:06:54.874098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.203 [2024-07-25 19:06:54.874130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:43.203 [2024-07-25 19:06:54.887014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fd208 00:33:43.204 [2024-07-25 19:06:54.888290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.888320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.899758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190eff18 00:33:43.204 [2024-07-25 19:06:54.900834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.900863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.911437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e3d08 00:33:43.204 [2024-07-25 19:06:54.913300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.913330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.921611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190eaef0 00:33:43.204 [2024-07-25 19:06:54.922562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.922595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.933903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f7da8 00:33:43.204 [2024-07-25 19:06:54.934951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.934978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.946461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ea680 00:33:43.204 [2024-07-25 19:06:54.947742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.947775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.959164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f7970 00:33:43.204 [2024-07-25 19:06:54.960611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.960645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.972069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e0a68 00:33:43.204 [2024-07-25 19:06:54.973709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.973742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.984339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e27f0 00:33:43.204 [2024-07-25 19:06:54.985919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.985946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:54.996902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e9e10 00:33:43.204 [2024-07-25 19:06:54.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:54.998851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:55.009468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e4578 00:33:43.204 [2024-07-25 19:06:55.011576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:55.011610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:55.018148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ec840 00:33:43.204 [2024-07-25 19:06:55.019077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:55.019124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:55.030879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e6738 00:33:43.204 [2024-07-25 19:06:55.032023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:55.032056] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:55.042177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fb480 00:33:43.204 [2024-07-25 19:06:55.043255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:55.043297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:55.055029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ea680 00:33:43.204 [2024-07-25 19:06:55.056282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:55.056324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:43.204 [2024-07-25 19:06:55.067807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e6fa8 00:33:43.204 [2024-07-25 19:06:55.069303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.204 [2024-07-25 19:06:55.069346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.080865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e23b8 00:33:43.465 [2024-07-25 19:06:55.082446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.082481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.093631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fac10 00:33:43.465 [2024-07-25 19:06:55.095382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.095425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.106420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fdeb0 00:33:43.465 [2024-07-25 19:06:55.108343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.108384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.119151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190e88f8 00:33:43.465 [2024-07-25 19:06:55.121204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.121246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.127799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ed4e8 00:33:43.465 [2024-07-25 19:06:55.128764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.128791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.140182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f9b30 00:33:43.465 [2024-07-25 19:06:55.141104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.141147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.152787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190f0788 00:33:43.465 [2024-07-25 19:06:55.153927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.153954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.166586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190df550 00:33:43.465 [2024-07-25 19:06:55.168301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.168348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.179246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ee5c8 00:33:43.465 [2024-07-25 19:06:55.180861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.180904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.191569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190ec408 00:33:43.465 [2024-07-25 19:06:55.193687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.193729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.200297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190eaef0 00:33:43.465 [2024-07-25 19:06:55.201238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 
19:06:55.201280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.213865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.214110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.214142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.226865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.227124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.227155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.240197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.240533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.240566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.253744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.253979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.254011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.266886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.267173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.267207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.279906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.280144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.280185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.293177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.293422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:43.465 [2024-07-25 19:06:55.293472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.306246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.306482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.306512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.319570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.319883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.319917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.465 [2024-07-25 19:06:55.332830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.465 [2024-07-25 19:06:55.333072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.465 [2024-07-25 19:06:55.333102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.346259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.346549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.346583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.359463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.359751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.359800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.372357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.372594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.372625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.385242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.385499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9093 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.385527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.398219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.398454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.398485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.411130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.411373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.411417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.424366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.424603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.424634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.437287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.437519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.437562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.450265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.450502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.450533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.463372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.463653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.463690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.476383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.476618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1777 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.476649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.489218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.489455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.489486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.502180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.502457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.502496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.515278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.515531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.515562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.528551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.528834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.528882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.541556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.541867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.541899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.554843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.555148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.555177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.567770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.568004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:25585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.568036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.581129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.581478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.581526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.726 [2024-07-25 19:06:55.594355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.726 [2024-07-25 19:06:55.594590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.726 [2024-07-25 19:06:55.594623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.607942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.608191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.608224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.621127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.621371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.621403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.634266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.634580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.634615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.647462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.647697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.647729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.660371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.660630] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.660658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.673270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.673505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.673533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.686243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.686480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.686511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.699254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.699493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.699534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.712174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.712411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.712442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.725176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.725414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.725442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.738078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.738316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.738347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.750951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.751197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.751225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.763858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.764097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.764127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.776719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.776956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.776987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.789692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.789928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.789959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.802746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.803036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.803091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.816128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.816363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.816394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.829136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.829371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.829402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.842414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 
19:06:55.842661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.842696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.987 [2024-07-25 19:06:55.855579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:43.987 [2024-07-25 19:06:55.855817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.987 [2024-07-25 19:06:55.855849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.246 [2024-07-25 19:06:55.869128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.246 [2024-07-25 19:06:55.869364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.869395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.882138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.882376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.882407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.895419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.895719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.895752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.908401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.908636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.908667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.921581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.921875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.921912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.934834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 
00:33:44.247 [2024-07-25 19:06:55.935075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.935106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.947749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.947985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.948015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.960603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.960846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.960876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.973507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.973743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.973773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.986377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.986608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.986648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:55.999306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:55.999600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:55.999647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.012369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.012607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.012638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.025318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with 
pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.025553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.025585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.038366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.038601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.038632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.051326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.051559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.051590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.064258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.064492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.064522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.077304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.077553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.077584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.090533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.090812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.090853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.103772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.104029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.104067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.247 [2024-07-25 19:06:56.117180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.247 [2024-07-25 19:06:56.117462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.247 [2024-07-25 19:06:56.117495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.130673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.130909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.130940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.143866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.144132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.144161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.157200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.157439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.157471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.170269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.170505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.170536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.183640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.183880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.183910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.196562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.196796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.196827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.209904] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.210158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.210188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.223034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.223281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.223314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.235999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.236244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.236275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.248934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.249183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.249223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.261846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.262084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.262111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.274868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.275114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.275154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 [2024-07-25 19:06:56.287794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3910) with pdu=0x2000190fc560 00:33:44.506 [2024-07-25 19:06:56.288029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.506 [2024-07-25 19:06:56.288056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.506 00:33:44.506 Latency(us) 00:33:44.506 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.506 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:44.506 nvme0n1 : 2.01 20162.58 78.76 0.00 0.00 6333.93 2512.21 16311.18 00:33:44.506 =================================================================================================================== 00:33:44.506 Total : 20162.58 78.76 0.00 0.00 6333.93 2512.21 16311.18 00:33:44.506 0 00:33:44.506 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:44.506 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:44.506 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:44.506 | .driver_specific 00:33:44.506 | .nvme_error 00:33:44.506 | .status_code 00:33:44.506 | .command_transient_transport_error' 00:33:44.506 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3685161 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3685161 ']' 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3685161 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3685161 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3685161' 00:33:44.767 killing process with pid 3685161 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3685161 00:33:44.767 Received shutdown signal, test time was about 2.000000 seconds 00:33:44.767 00:33:44.767 Latency(us) 00:33:44.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.767 =================================================================================================================== 00:33:44.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.767 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3685161 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@58 -- # bperfpid=3685567 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3685567 /var/tmp/bperf.sock 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3685567 ']' 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:45.025 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:45.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:45.026 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:45.026 19:06:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.026 [2024-07-25 19:06:56.868537] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:45.026 [2024-07-25 19:06:56.868619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685567 ] 00:33:45.026 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.026 Zero copy mechanism will not be used. 
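Before the bperf process from the first randwrite pass is killed, the script traced above reads bdevperf's iostat over its RPC socket and pulls out the transient-transport-error counter with jq; the pass only succeeds because that counter (158 here) is greater than zero. A minimal standalone version of the same check, assuming the workspace path and /var/tmp/bperf.sock socket shown in the trace (SPDK_DIR, BPERF_SOCK and errcount are just local names for this sketch):

    # Local names for this sketch only; both values are taken from the trace.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # bdev_get_iostat exposes per-status-code NVMe error counters because the
    # controller was created with --nvme-error-stat; the jq filter is the one
    # used by the script above.
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The check passes only if the injected digest corruption produced at least
    # one TRANSIENT TRANSPORT ERROR completion.
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"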
00:33:45.026 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.284 [2024-07-25 19:06:56.930592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.284 [2024-07-25 19:06:57.018195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.284 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:45.284 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:45.284 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:45.284 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:45.542 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:45.542 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.542 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.542 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.542 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.542 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.109 nvme0n1 00:33:46.109 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:46.109 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.109 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.109 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.109 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:46.109 19:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:46.109 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:46.109 Zero copy mechanism will not be used. 00:33:46.109 Running I/O for 2 seconds... 
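The trace just above is the setup for the second pass: bdevperf is told to keep per-status-code NVMe error counters and retry failed I/O indefinitely, crc32c error injection is disabled while the controller is attached with TCP data digests enabled (--ddgst), injection is then re-armed in corrupt mode, and perform_tests starts the 2-second run. A condensed sketch of that RPC sequence follows; bperf_rpc mirrors the helper visible in the trace, tgt_rpc is only a local stand-in for the script's rpc_cmd, and the injection calls are assumed to go to the nvmf target's default RPC socket since the trace shows no -s argument for them:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf app
    tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }  # nvmf target, default socket assumed

    # Keep NVMe error statistics per status code and retry forever, so injected
    # digest failures show up as counters instead of failing the workload.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digests enabled while injection is disabled.
    tgt_rpc accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm injection in corrupt mode (arguments copied verbatim from the
    # trace) and start the timed run.
    tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests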
00:33:46.109 [2024-07-25 19:06:57.894710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.895135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.895188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.901938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.902285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.902317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.908661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.909024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.909065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.914872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.915228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.915260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.921197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.921550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.921585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.927479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.927820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.927850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.933811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.934199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.934237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.941220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.941567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.941596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.947667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.947769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.947802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.953756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.954097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.954143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.960511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.960871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.960904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.966594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.966908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.966941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.972385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.972712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.972748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.979076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.979456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.979491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.109 [2024-07-25 19:06:57.985417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.109 [2024-07-25 19:06:57.985750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.109 [2024-07-25 19:06:57.985791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.368 [2024-07-25 19:06:57.991292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.368 [2024-07-25 19:06:57.991640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.368 [2024-07-25 19:06:57.991669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.368 [2024-07-25 19:06:57.997794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.368 [2024-07-25 19:06:57.998152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.368 [2024-07-25 19:06:57.998183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.368 [2024-07-25 19:06:58.004099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.368 [2024-07-25 19:06:58.004436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.368 [2024-07-25 19:06:58.004464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.368 [2024-07-25 19:06:58.010299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.368 [2024-07-25 19:06:58.010640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.368 [2024-07-25 19:06:58.010683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.368 [2024-07-25 19:06:58.016470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.368 [2024-07-25 19:06:58.016802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.368 [2024-07-25 19:06:58.016835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.368 [2024-07-25 19:06:58.022093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.368 [2024-07-25 19:06:58.022437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.368 [2024-07-25 19:06:58.022470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.368 [2024-07-25 19:06:58.027755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.028093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.028162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.033587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.033917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.033950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.040297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.040627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.040660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.047243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.047604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.047632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.053628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.053959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.053992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.059131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.059483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.059516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.064803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.065155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 
[2024-07-25 19:06:58.065186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.070422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.070643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.070675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.076043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.076382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.076416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.082319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.082625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.082662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.087720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.088021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.088054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.093114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.093433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.093466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.098427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.098748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.098776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.103792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.104115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.104144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.109451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.109766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.109810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.115668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.115967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.116000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.122503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.122797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.122834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.128016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.128335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.128364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.133243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.133552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.133599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.138727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.139020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.139057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.144300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.144606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.144640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.149976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.150290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.150319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.155333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.155667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.155711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.161043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.161357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.161411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.167177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.167482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.167515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.172409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.172707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-25 19:06:58.172740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-25 19:06:58.177731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.369 [2024-07-25 19:06:58.178048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.178096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.183178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.183487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.183519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.189132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.189419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.189456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.195527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.195848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.195876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.202162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.202468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.202501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.207708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.208005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.208038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.213598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.213898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.213930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.218998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.219295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.219325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.224665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 
[2024-07-25 19:06:58.224960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.224992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.230409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.230709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.230742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.236223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.236542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.236571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-25 19:06:58.241878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.370 [2024-07-25 19:06:58.242208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-25 19:06:58.242241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.247827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.248197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.248227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.254655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.255040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.255087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.260473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.260772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.260812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.266197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.266497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.266529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.271707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.271988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.272018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.277681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.278033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.278074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.284776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.285135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.285165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.291128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.291485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.291519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.298466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.298760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.298800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.629 [2024-07-25 19:06:58.305617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.629 [2024-07-25 19:06:58.305954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.629 [2024-07-25 19:06:58.305984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.313287] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.313577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.313610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.319923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.320217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.320248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.326354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.326633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.326687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.332767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.333009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.333038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.338698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.338936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.338966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.344535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.344776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.344806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.350553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.350812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.350841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
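Every record in this stretch follows the same pattern: data_crc32_calc_done in tcp.c flags a data-digest mismatch on the PDU, and the matching WRITE then completes with status (00/22), i.e. SCT 0x0 / SC 0x22, Transient Transport Error, which the bdev layer keeps retrying because the controller was set up with --bdev-retry-count -1. If this console output has been captured to a file (bperf.log below is only a placeholder name), the two sides of that pattern can be tallied and cross-checked directly:

    # Digest mismatches flagged by the TCP transport vs. transient-transport-error
    # completions reported through the NVMe queue pair; bperf.log is a placeholder.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log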
00:33:46.630 [2024-07-25 19:06:58.356494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.356739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.356779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.362390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.362654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.362701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.368421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.368664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.368727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.374344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.374594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.374623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.380336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.380633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.380661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.385425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.385663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.385693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.390382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.390651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.390678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.395229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.395496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.395522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.400507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.400821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.400857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.405916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.406188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.406218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.411085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.411361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.411391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.417209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.417443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.417486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.422954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.423223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.423252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.428248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.428499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.428526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.433292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.433550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.433578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.438872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.439137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.439166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.444975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.445249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-25 19:06:58.445278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-25 19:06:58.450173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.630 [2024-07-25 19:06:58.450430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.450472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.455220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.455468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.455497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.460500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.460762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.460806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.465680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.465956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.465983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.471009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.471288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.471317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.476896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.477172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.477201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.483283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.483552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.483579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.488530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.488819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.488845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.493691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.493969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.494010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.498740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.499007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-25 19:06:58.499035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-25 19:06:58.504070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.631 [2024-07-25 19:06:58.504331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 
[2024-07-25 19:06:58.504359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.509160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.509405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.509433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.515266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.515503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.515531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.520644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.520908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.520955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.526782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.527042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.527080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.532458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.532750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.532776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.537688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.537964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.537991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.542911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.543190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.543218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.548204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.548453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.548481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.553431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.553708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.553735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.558606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.558863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.558906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.564193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.564433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.564461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.569401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.569676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.569704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.574606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.574882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.574923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.580936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.581202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.581231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.587202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.587455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.587482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.593314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.593605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.593647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.599358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.599742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.599770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.606351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.606660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.606688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.613891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.614172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.614225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.621327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.621650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.621678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.629115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.629512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.629545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.637008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.637331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.637360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.644581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.644845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.644874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.652140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.652507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.652536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.659770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.660097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-25 19:06:58.660130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-25 19:06:58.667622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.892 [2024-07-25 19:06:58.667954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.667982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.675721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.676035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.676088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.683195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 
[2024-07-25 19:06:58.683436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.683465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.690248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.690611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.690640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.697973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.698298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.698326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.705353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.705625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.705671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.711999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.712296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.712325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.718073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.718333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.718380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.724694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.724956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.724990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.731010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.731278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.731320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.737328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.737620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.737668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.743386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.743649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.743682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.748694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.748955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.748988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.753934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.754234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.754264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.759271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.759538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.759572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.893 [2024-07-25 19:06:58.764795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:46.893 [2024-07-25 19:06:58.765057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.893 [2024-07-25 19:06:58.765112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.769917] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.770197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.770228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.775792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.776052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.776121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.780689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.780937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.780981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.786622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.786883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.786918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.792457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.792724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.792757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.798379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.798657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.798691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.804157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.804430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.804464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
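The repeated "data_crc32_calc_done: Data digest error" entries above report CRC32C mismatches on the NVMe/TCP data digest (DDGST) of incoming WRITE data PDUs; each one is completed back to the host with the generic status shown in the log, "(00/22)" COMMAND TRANSIENT TRANSPORT ERROR, so the initiator may retry. As a point of reference, the checksum involved is plain CRC32C over the PDU data. The following is a minimal, self-contained sketch of that checksum, not SPDK's own code; the payload buffer is a made-up example, and in a real target the digest would be computed over the DATA field of the H2C data PDU and compared against its 4-byte DDGST trailer.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
 * the checksum NVMe/TCP uses for header and data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical payload, for illustration only.  A mismatch between
     * this value and the DDGST carried in the PDU is what the log above
     * reports as "Data digest error"; the command is then completed with
     * a transient transport error instead of being executed. */
    const char payload[] = "example PDU data";

    printf("DDGST = 0x%08x\n",
           crc32c((const uint8_t *)payload, strlen(payload)));
    return 0;
}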
00:33:47.155 [2024-07-25 19:06:58.809506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.809773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.809806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.814741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.815007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.815041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.819923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.820215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.820244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.825028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.825314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.825358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.830531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.830796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.155 [2024-07-25 19:06:58.830835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.155 [2024-07-25 19:06:58.836771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.155 [2024-07-25 19:06:58.837036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.837077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.842032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.842307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.842337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.847204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.847456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.847489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.852106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.852341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.852387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.856727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.856964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.856998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.861411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.861645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.861679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.866150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.866359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.866413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.870678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.870912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.870945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.876284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.876536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.876569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.881019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.881260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.881291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.885697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.885928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.885962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.890390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.890626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.890660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.895000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.895240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.895269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.899758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.899992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.900030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.904424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.904657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.904693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.909145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.909362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.909409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.913838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.914080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.914127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.919459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.919716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.919750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.924146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.924360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.924407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.928799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.929038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.929080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.933506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.933740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.933773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.938199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.938430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.938464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.942906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.943166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 
[2024-07-25 19:06:58.943197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.947631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.947867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.947900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.952255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.952495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.952529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.956939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.957191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.957226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.961632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.961863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.156 [2024-07-25 19:06:58.961896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.156 [2024-07-25 19:06:58.966294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.156 [2024-07-25 19:06:58.966536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:58.966569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:58.970955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:58.971200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:58.971231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:58.975619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:58.975850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:58.975883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:58.980319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:58.980614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:58.980645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:58.985355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:58.985601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:58.985634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:58.990373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:58.990733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:58.990765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:58.996607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:58.996909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:58.996942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:59.003482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:59.003813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:59.003847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:59.010201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:59.010506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:59.010539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:59.017286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:59.017604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:59.017637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:59.024229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:59.024561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:59.024594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.157 [2024-07-25 19:06:59.030562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.157 [2024-07-25 19:06:59.030900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.157 [2024-07-25 19:06:59.030941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.418 [2024-07-25 19:06:59.037673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.038002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.038036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.043915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.044172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.044203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.048870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.049097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.049142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.053447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.053667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.053700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.057930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.058166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.058197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.062488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.062714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.062747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.067138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.067337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.067365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.071630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.071845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.071881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.076211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.076430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.076463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.080750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.080960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.080992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.085308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.085542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.085575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.089884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 
[2024-07-25 19:06:59.090111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.090157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.094494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.094712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.094751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.099134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.099330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.099373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.103746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.103963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.103996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.108343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.108573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.108606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.112878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.113112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.113147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.117432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.117654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.117687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.122035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) 
with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.122261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.122289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.126613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.126829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.126862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.131186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.131400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.131431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.135703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.135923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.135955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.140311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.140542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.140575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.144825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.145040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.145078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.149433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.149649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.149682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.154003] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.154230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.419 [2024-07-25 19:06:59.154260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.419 [2024-07-25 19:06:59.158518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.419 [2024-07-25 19:06:59.158734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.158764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.163216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.163419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.163453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.167754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.167973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.168007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.172382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.172603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.172642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.176955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.177199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.177229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.181595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.181815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.181848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 
19:06:59.186118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.186315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.186343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.190637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.190857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.190890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.195169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.195360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.195406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.199681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.199900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.199933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.204228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.204446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.204484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.208813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.209028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.209067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.213408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.213628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.213659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.218015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.218246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.218276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.222545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.222761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.222799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.227111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.227321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.227349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.231676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.231897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.231929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.236236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.236461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.236493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.240750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.240967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.240999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.245332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.245576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.245608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.249908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.250147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.250175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.254470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.254684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.254717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.258977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.259211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.259241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.263545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.263768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.263806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.268054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.268281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.268310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.272671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.272890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.272923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.277693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.277910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.420 [2024-07-25 19:06:59.277942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.420 [2024-07-25 19:06:59.282276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.420 [2024-07-25 19:06:59.282501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.421 [2024-07-25 19:06:59.282534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.421 [2024-07-25 19:06:59.286851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.421 [2024-07-25 19:06:59.287078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.421 [2024-07-25 19:06:59.287124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.421 [2024-07-25 19:06:59.291369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.421 [2024-07-25 19:06:59.291602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.421 [2024-07-25 19:06:59.291644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.295965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.296198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.296227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.301218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.301488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.301520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.307409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.307753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.307786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.313136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.313473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 
[2024-07-25 19:06:59.313503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.318226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.318518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.318551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.322931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.323204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.323235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.327938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.328170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.328201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.332529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.332739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.332769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.336973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.337203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.337231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.341463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.341673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.341705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.345921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.346149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.346178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.350850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.351107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.351152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.356223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.356446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.356476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.360684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.360890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.360922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.365177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.365387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.365425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.369581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.369787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.369821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.374003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.374227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.374258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.378437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.378674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.378706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.382847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.383086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.383133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.387397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.387613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.387646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.391915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.392149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.392180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.396517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.396736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.396767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.401117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.401316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.401344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.682 [2024-07-25 19:06:59.405650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.682 [2024-07-25 19:06:59.405863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.682 [2024-07-25 19:06:59.405896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.410206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.410439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.410476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.414828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.415036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.415092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.419480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.419693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.419726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.424087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.424305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.424335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.428689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.428905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.428938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.433208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.433422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.433455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.437702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.437922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.437954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.442200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 
[2024-07-25 19:06:59.442392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.442437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.446749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.446969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.447001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.451280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.451506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.451538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.455896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.456145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.456175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.460479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.460698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.460730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.465578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.465805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.465838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.470765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.470988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.471021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.475424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.475643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.475676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.480010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.480235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.480265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.484581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.484801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.484833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.489125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.489319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.489370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.493771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.493988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.494025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.498327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.498556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.498588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.502940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.503173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.503203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.507501] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.507723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.507756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.512009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.512231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.512261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.516721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.516935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.516972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.521237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.521461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.521499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.525713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.683 [2024-07-25 19:06:59.525928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.683 [2024-07-25 19:06:59.525960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.683 [2024-07-25 19:06:59.530318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.684 [2024-07-25 19:06:59.530547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.684 [2024-07-25 19:06:59.530579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.684 [2024-07-25 19:06:59.534908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.684 [2024-07-25 19:06:59.535155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.684 [2024-07-25 19:06:59.535183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
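The repeated data_crc32_calc_done errors above are the NVMe/TCP data digest check failing: the data digest is a CRC32C computed over the PDU payload, and when the received digest does not match, the target rejects the transfer and the initiator sees the completion printed next to it, COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal standalone sketch of that comparison follows; it is illustrative only, the names (crc32c, payload) are not SPDK's, and it is not the code path reported in the log.

/*
 * Minimal sketch of an NVMe/TCP-style data digest check (CRC32C over the
 * PDU payload). Illustrative only -- not SPDK's implementation.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	uint8_t payload[32];
	memset(payload, 0xA5, sizeof(payload));

	/* Digest the sender would place in the PDU. */
	uint32_t expected = crc32c(payload, sizeof(payload));

	/* Flip one bit to emulate corruption on the wire. */
	payload[7] ^= 0x01;
	uint32_t actual = crc32c(payload, sizeof(payload));

	if (actual != expected) {
		/* This mismatch is what the log reports as a data digest error;
		 * the command is then completed with a transport-level error. */
		printf("Data digest error: expected 0x%08x, got 0x%08x\n",
		       expected, actual);
	}
	return 0;
}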
00:33:47.684 [2024-07-25 19:06:59.539493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.684 [2024-07-25 19:06:59.539711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.684 [2024-07-25 19:06:59.539744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.684 [2024-07-25 19:06:59.544140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.684 [2024-07-25 19:06:59.544357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.684 [2024-07-25 19:06:59.544389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.684 [2024-07-25 19:06:59.548711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.684 [2024-07-25 19:06:59.548922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.684 [2024-07-25 19:06:59.548959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.684 [2024-07-25 19:06:59.553333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.684 [2024-07-25 19:06:59.553564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.684 [2024-07-25 19:06:59.553602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.558318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.558626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.558659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.564695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.565008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.565042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.571035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.571379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.571421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.577997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.578284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.578314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.584696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.585033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.585073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.591617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.591957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.591989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.598800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.599081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.599129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.605837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.606179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.606209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.612551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.612857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.612896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.619847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.620184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.620214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.626989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.627282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.627313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.633767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.634107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.634136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.640944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.641236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.641278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.648077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.648438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.648472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.655160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.655488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.655521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.661647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.661916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.661954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.668382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.668749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.668780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.675542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.675812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.675845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.682194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.682470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.682505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.687435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.687653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.687686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.692067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.692280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.692310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.696624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.696851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.696884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.701225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.946 [2024-07-25 19:06:59.701448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.946 [2024-07-25 19:06:59.701482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.946 [2024-07-25 19:06:59.705896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.706139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:47.947 [2024-07-25 19:06:59.706169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.710469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.710688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.710720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.715049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.715274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.715307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.719614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.719828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.719861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.724185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.724398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.724429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.728747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.728961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.728999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.733319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.733544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.733577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.737905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.738142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.738173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.742533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.742752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.742785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.747198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.747421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.747454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.751750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.751970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.752002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.756522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.756738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.756771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.761070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.761288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.761316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.765608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.765821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.765855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.770198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.770413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.770454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.774750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.774967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.775006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.779401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.779621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.779654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.783988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.784216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.784244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.788649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.788871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.788903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.793217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.793434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.793468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.797767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.797985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.798018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.802347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.802584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.802617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.806980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.807212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.807239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.811547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.811762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.811795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.816170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.816390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.816422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.947 [2024-07-25 19:06:59.820756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:47.947 [2024-07-25 19:06:59.820976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.947 [2024-07-25 19:06:59.821009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.825326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.825546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.825576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.829938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.830178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.830208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.834483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 
[2024-07-25 19:06:59.834700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.834733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.839002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.839226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.839256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.843631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.843852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.843885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.848223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.848442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.848479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.852804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.853019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.853051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.857398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.857616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.857649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.861955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.862189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.862220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.866607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.866828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.866861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.871626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.871840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.871871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.876455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.876678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.876711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.882220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.882528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.882561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.207 [2024-07-25 19:06:59.888985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b3c50) with pdu=0x2000190fef90 00:33:48.207 [2024-07-25 19:06:59.889241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.207 [2024-07-25 19:06:59.889271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.207 00:33:48.207 Latency(us) 00:33:48.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.207 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:48.207 nvme0n1 : 2.00 5744.68 718.08 0.00 0.00 2776.41 2099.58 9029.40 00:33:48.207 =================================================================================================================== 00:33:48.207 Total : 5744.68 718.08 0.00 0.00 2776.41 2099.58 9029.40 00:33:48.207 0 00:33:48.207 19:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:48.207 19:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:48.207 19:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:48.207 | .driver_specific 00:33:48.207 | .nvme_error 00:33:48.207 | .status_code 00:33:48.207 | .command_transient_transport_error' 00:33:48.207 19:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 371 > 0 )) 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3685567 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3685567 ']' 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3685567 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3685567 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3685567' 00:33:48.467 killing process with pid 3685567 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3685567 00:33:48.467 Received shutdown signal, test time was about 2.000000 seconds 00:33:48.467 00:33:48.467 Latency(us) 00:33:48.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.467 =================================================================================================================== 00:33:48.467 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:48.467 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3685567 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3684203 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3684203 ']' 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3684203 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3684203 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3684203' 00:33:48.726 killing process with pid 3684203 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3684203 00:33:48.726 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3684203 00:33:48.984 00:33:48.984 real 0m15.351s 00:33:48.984 user 0m30.465s 00:33:48.984 sys 0m4.348s 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:48.984 19:07:00 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.984 ************************************ 00:33:48.984 END TEST nvmf_digest_error 00:33:48.984 ************************************ 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:48.984 rmmod nvme_tcp 00:33:48.984 rmmod nvme_fabrics 00:33:48.984 rmmod nvme_keyring 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3684203 ']' 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3684203 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3684203 ']' 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3684203 00:33:48.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3684203) - No such process 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3684203 is not found' 00:33:48.984 Process with pid 3684203 is not found 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.984 19:07:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.521 19:07:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:51.521 00:33:51.521 real 0m34.997s 00:33:51.521 user 1m1.168s 00:33:51.521 sys 0m10.150s 00:33:51.521 19:07:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:51.521 19:07:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.521 ************************************ 00:33:51.521 END TEST nvmf_digest 00:33:51.521 ************************************ 00:33:51.521 19:07:02 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:51.521 19:07:02 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:51.521 19:07:02 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:51.521 19:07:02 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf 
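The pass/fail decision for the digest-error case above reduces to one query: get_transient_errcount asks the bdevperf instance for per-bdev I/O statistics over its RPC socket and pulls the transient-transport-error counter out with jq. A minimal standalone sketch of that check, with the socket path, bdev name and JSON paths taken from the log lines above (the errcount variable and the echo are illustrative only):

  # Count completions that failed with a TRANSIENT TRANSPORT ERROR, i.e. the
  # completions matching the data digest errors logged above.
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
               | jq -r '.bdevs[0]
                        | .driver_specific
                        | .nvme_error
                        | .status_code
                        | .command_transient_transport_error')

  # host/digest.sh only requires the counter to be non-zero; this run reported 371.
  (( errcount > 0 )) && echo 'data digest errors surfaced as transient transport errors'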
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:51.521 19:07:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:51.521 19:07:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:51.521 19:07:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.521 ************************************ 00:33:51.521 START TEST nvmf_bdevperf 00:33:51.521 ************************************ 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:51.521 * Looking for test storage... 00:33:51.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.521 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:51.522 19:07:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:53.424 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:53.424 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:53.424 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:53.425 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:53.425 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.425 19:07:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:53.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:53.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:33:53.425 00:33:53.425 --- 10.0.0.2 ping statistics --- 00:33:53.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.425 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:53.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:33:53.425 00:33:53.425 --- 10.0.0.1 ping statistics --- 00:33:53.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.425 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3687912 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3687912 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3687912 ']' 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.425 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.425 [2024-07-25 19:07:05.144405] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
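Both ping checks pass, which closes out nvmftestinit: the two ice ports found earlier are wired into a point-to-point NVMe/TCP rig, with cvl_0_0 moved into its own namespace as the target side (10.0.0.2) and cvl_0_1 left in the default namespace as the initiator side (10.0.0.1). The nvmf_tcp_init steps above, condensed into one block with the interface, namespace and address values exactly as detected in this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) through
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability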
00:33:53.425 [2024-07-25 19:07:05.144487] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.425 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.425 [2024-07-25 19:07:05.210864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:53.690 [2024-07-25 19:07:05.302862] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.690 [2024-07-25 19:07:05.302911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.690 [2024-07-25 19:07:05.302932] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.690 [2024-07-25 19:07:05.302942] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.690 [2024-07-25 19:07:05.302951] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.690 [2024-07-25 19:07:05.303040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:53.690 [2024-07-25 19:07:05.303168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.690 [2024-07-25 19:07:05.303172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.690 [2024-07-25 19:07:05.446447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.690 Malloc0 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.690 [2024-07-25 19:07:05.510456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:53.690 { 00:33:53.690 "params": { 00:33:53.690 "name": "Nvme$subsystem", 00:33:53.690 "trtype": "$TEST_TRANSPORT", 00:33:53.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.690 "adrfam": "ipv4", 00:33:53.690 "trsvcid": "$NVMF_PORT", 00:33:53.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.690 "hdgst": ${hdgst:-false}, 00:33:53.690 "ddgst": ${ddgst:-false} 00:33:53.690 }, 00:33:53.690 "method": "bdev_nvme_attach_controller" 00:33:53.690 } 00:33:53.690 EOF 00:33:53.690 )") 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:53.690 19:07:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:53.690 "params": { 00:33:53.690 "name": "Nvme1", 00:33:53.690 "trtype": "tcp", 00:33:53.690 "traddr": "10.0.0.2", 00:33:53.690 "adrfam": "ipv4", 00:33:53.690 "trsvcid": "4420", 00:33:53.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.690 "hdgst": false, 00:33:53.690 "ddgst": false 00:33:53.690 }, 00:33:53.690 "method": "bdev_nvme_attach_controller" 00:33:53.690 }' 00:33:53.690 [2024-07-25 19:07:05.557571] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:53.690 [2024-07-25 19:07:05.557652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688056 ] 00:33:53.995 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.995 [2024-07-25 19:07:05.619577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.995 [2024-07-25 19:07:05.710424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.259 Running I/O for 1 seconds... 
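Before this first bdevperf run starts, tgt_init has already stood up the target with five RPCs against the namespace-launched nvmf_tgt (pid 3687912): a TCP transport, a 64 MiB malloc bdev, and subsystem cnode1 exposing that bdev on 10.0.0.2:4420. rpc_cmd is the autotest wrapper around scripts/rpc.py; spelled out against the default RPC socket (an assumption here — the wrapper resolves the socket itself), the sequence is:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same options as NVMF_TRANSPORT_OPTS above
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # Malloc0 becomes nsid 1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # NVMe/TCP listener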
00:33:55.192 00:33:55.192 Latency(us) 00:33:55.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:55.192 Verification LBA range: start 0x0 length 0x4000 00:33:55.192 Nvme1n1 : 1.01 9029.77 35.27 0.00 0.00 14083.07 1747.63 14757.74 00:33:55.192 =================================================================================================================== 00:33:55.192 Total : 9029.77 35.27 0.00 0.00 14083.07 1747.63 14757.74 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3688203 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:55.450 { 00:33:55.450 "params": { 00:33:55.450 "name": "Nvme$subsystem", 00:33:55.450 "trtype": "$TEST_TRANSPORT", 00:33:55.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.450 "adrfam": "ipv4", 00:33:55.450 "trsvcid": "$NVMF_PORT", 00:33:55.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.450 "hdgst": ${hdgst:-false}, 00:33:55.450 "ddgst": ${ddgst:-false} 00:33:55.450 }, 00:33:55.450 "method": "bdev_nvme_attach_controller" 00:33:55.450 } 00:33:55.450 EOF 00:33:55.450 )") 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:55.450 19:07:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:55.450 "params": { 00:33:55.450 "name": "Nvme1", 00:33:55.450 "trtype": "tcp", 00:33:55.450 "traddr": "10.0.0.2", 00:33:55.450 "adrfam": "ipv4", 00:33:55.450 "trsvcid": "4420", 00:33:55.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.450 "hdgst": false, 00:33:55.450 "ddgst": false 00:33:55.450 }, 00:33:55.450 "method": "bdev_nvme_attach_controller" 00:33:55.450 }' 00:33:55.450 [2024-07-25 19:07:07.311658] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:55.450 [2024-07-25 19:07:07.311747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688203 ] 00:33:55.708 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.708 [2024-07-25 19:07:07.371700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.708 [2024-07-25 19:07:07.456282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.967 Running I/O for 15 seconds... 
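The /dev/fd/62 and /dev/fd/63 config paths in the two bdevperf command lines are bash process substitutions: gen_nvmf_target_json renders the bdev_nvme_attach_controller entry printed above into a JSON config, and bdevperf reads it from that pipe instead of talking to an RPC socket. A condensed sketch of the second invocation follows; reading -f as "keep running after the backing controller fails" is an assumption from context, since the test immediately kills the target mid-run:

  # 15-second verify workload, 128 outstanding 4 KiB I/Os, config delivered via process substitution.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f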
00:33:58.506 19:07:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3687912 00:33:58.506 19:07:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:58.506 [2024-07-25 19:07:10.285430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285835] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.506 [2024-07-25 19:07:10.285870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.506 [2024-07-25 19:07:10.285888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.285907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.285925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.285941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.285958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.285974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.285991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.286972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.286989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.287022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.287065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.287125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.287156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.287192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 [2024-07-25 19:07:10.287224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.507 
[2024-07-25 19:07:10.287254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.507 [2024-07-25 19:07:10.287268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.508 [2024-07-25 19:07:10.287297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.508 [2024-07-25 19:07:10.287327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.508 [2024-07-25 19:07:10.287375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.508 [2024-07-25 19:07:10.287409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.508 [2024-07-25 19:07:10.287442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.287982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.287999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52696 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.508 [2024-07-25 19:07:10.288616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.508 [2024-07-25 19:07:10.288634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 
19:07:10.288650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.288983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.288999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.509 [2024-07-25 19:07:10.289924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.289940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc72150 is same with the state(5) to be set 00:33:58.509 [2024-07-25 19:07:10.289960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.509 [2024-07-25 19:07:10.289973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.509 [2024-07-25 19:07:10.289986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52496 len:8 PRP1 0x0 PRP2 0x0 00:33:58.509 [2024-07-25 19:07:10.290002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.509 [2024-07-25 19:07:10.290078] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc72150 was disconnected and freed. reset controller. 
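The wall of ABORTED - SQ DELETION completions above is the expected shape of this failover step: a second bdevperf was started for a 15-second verify run, host/bdevperf.sh@33 then hard-killed the process serving the target (pid 3687912 here), so all 128 queued commands were aborted when qpair 0xc72150 was torn down, and bdev_nvme moves on to resetting the controller in the entries that follow. A rough reconstruction of those steps is sketched below; gen_nvmf_target_json is the nvmf/common.sh helper echoed above, while $target_pid is only a placeholder, since the suite's real variable name is not shown in the log.

# 15-second verify run against the generated JSON config, as in the log
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!          # host/bdevperf.sh@30: 3688203 in this run
sleep 3                 # @32: let the verify workload reach steady state
kill -9 "$target_pid"   # @33: 3687912 here; nothing is left listening on 10.0.0.2:4420
sleep 3                 # @35: bdevperf sees aborted I/O and begins reconnect attempts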
00:33:58.509 [2024-07-25 19:07:10.293955] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.510 [2024-07-25 19:07:10.294032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.294741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.510 [2024-07-25 19:07:10.294772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.510 [2024-07-25 19:07:10.294789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.510 [2024-07-25 19:07:10.295042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.295299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.510 [2024-07-25 19:07:10.295324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.510 [2024-07-25 19:07:10.295342] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.510 [2024-07-25 19:07:10.298915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.510 [2024-07-25 19:07:10.308197] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.510 [2024-07-25 19:07:10.308593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.510 [2024-07-25 19:07:10.308624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.510 [2024-07-25 19:07:10.308643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.510 [2024-07-25 19:07:10.308881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.309140] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.510 [2024-07-25 19:07:10.309165] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.510 [2024-07-25 19:07:10.309181] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.510 [2024-07-25 19:07:10.312759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.510 [2024-07-25 19:07:10.322043] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.510 [2024-07-25 19:07:10.322500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.510 [2024-07-25 19:07:10.322526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.510 [2024-07-25 19:07:10.322557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.510 [2024-07-25 19:07:10.322805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.323050] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.510 [2024-07-25 19:07:10.323085] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.510 [2024-07-25 19:07:10.323102] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.510 [2024-07-25 19:07:10.326671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.510 [2024-07-25 19:07:10.335943] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.510 [2024-07-25 19:07:10.336372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.510 [2024-07-25 19:07:10.336403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.510 [2024-07-25 19:07:10.336421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.510 [2024-07-25 19:07:10.336659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.336903] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.510 [2024-07-25 19:07:10.336927] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.510 [2024-07-25 19:07:10.336942] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.510 [2024-07-25 19:07:10.340520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.510 [2024-07-25 19:07:10.349801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.510 [2024-07-25 19:07:10.350228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.510 [2024-07-25 19:07:10.350259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.510 [2024-07-25 19:07:10.350282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.510 [2024-07-25 19:07:10.350521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.350765] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.510 [2024-07-25 19:07:10.350789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.510 [2024-07-25 19:07:10.350805] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.510 [2024-07-25 19:07:10.354393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.510 [2024-07-25 19:07:10.363668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.510 [2024-07-25 19:07:10.364049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.510 [2024-07-25 19:07:10.364086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.510 [2024-07-25 19:07:10.364105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.510 [2024-07-25 19:07:10.364344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.364587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.510 [2024-07-25 19:07:10.364611] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.510 [2024-07-25 19:07:10.364627] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.510 [2024-07-25 19:07:10.368208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.510 [2024-07-25 19:07:10.377709] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.510 [2024-07-25 19:07:10.378076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.510 [2024-07-25 19:07:10.378108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.510 [2024-07-25 19:07:10.378126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.510 [2024-07-25 19:07:10.378364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.510 [2024-07-25 19:07:10.378607] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.510 [2024-07-25 19:07:10.378631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.510 [2024-07-25 19:07:10.378647] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.382231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.772 [2024-07-25 19:07:10.391719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.392123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.392155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.392173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.772 [2024-07-25 19:07:10.392412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.772 [2024-07-25 19:07:10.392656] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.772 [2024-07-25 19:07:10.392685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.772 [2024-07-25 19:07:10.392702] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.396282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.772 [2024-07-25 19:07:10.405559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.405939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.405970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.405988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.772 [2024-07-25 19:07:10.406239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.772 [2024-07-25 19:07:10.406483] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.772 [2024-07-25 19:07:10.406507] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.772 [2024-07-25 19:07:10.406523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.410099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.772 [2024-07-25 19:07:10.419582] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.419933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.419964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.419981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.772 [2024-07-25 19:07:10.420237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.772 [2024-07-25 19:07:10.420481] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.772 [2024-07-25 19:07:10.420505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.772 [2024-07-25 19:07:10.420521] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.424097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.772 [2024-07-25 19:07:10.433578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.433966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.433997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.434015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.772 [2024-07-25 19:07:10.434264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.772 [2024-07-25 19:07:10.434507] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.772 [2024-07-25 19:07:10.434531] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.772 [2024-07-25 19:07:10.434547] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.438119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.772 [2024-07-25 19:07:10.447598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.448006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.448037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.448054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.772 [2024-07-25 19:07:10.448305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.772 [2024-07-25 19:07:10.448549] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.772 [2024-07-25 19:07:10.448572] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.772 [2024-07-25 19:07:10.448588] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.452168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.772 [2024-07-25 19:07:10.461438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.461853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.461881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.461897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.772 [2024-07-25 19:07:10.462149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.772 [2024-07-25 19:07:10.462408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.772 [2024-07-25 19:07:10.462432] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.772 [2024-07-25 19:07:10.462448] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.466016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.772 [2024-07-25 19:07:10.475332] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.475747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.475778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.475796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.772 [2024-07-25 19:07:10.476035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.772 [2024-07-25 19:07:10.476288] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.772 [2024-07-25 19:07:10.476313] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.772 [2024-07-25 19:07:10.476329] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.772 [2024-07-25 19:07:10.479902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.772 [2024-07-25 19:07:10.489179] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.772 [2024-07-25 19:07:10.489580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.772 [2024-07-25 19:07:10.489611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.772 [2024-07-25 19:07:10.489629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.489873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.490130] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.490155] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.490171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.493738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.773 [2024-07-25 19:07:10.503035] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.503451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.503482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.503500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.503739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.503982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.504006] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.504021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.507600] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.773 [2024-07-25 19:07:10.516874] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.517291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.517323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.517340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.517579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.517822] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.517846] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.517862] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.521447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.773 [2024-07-25 19:07:10.530723] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.531124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.531156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.531174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.531412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.531656] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.531680] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.531701] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.535279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.773 [2024-07-25 19:07:10.544753] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.545171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.545204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.545223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.545462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.545727] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.545754] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.545770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.549528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.773 [2024-07-25 19:07:10.558818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.559206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.559236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.559253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.559485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.559753] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.559784] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.559808] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.563412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.773 [2024-07-25 19:07:10.572732] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.573137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.573170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.573188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.573428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.573671] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.573695] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.573711] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.577306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.773 [2024-07-25 19:07:10.586584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.586997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.587029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.587047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.587297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.587541] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.587565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.587580] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.591159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.773 [2024-07-25 19:07:10.600441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.600819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.600850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.600868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.601117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.601361] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.601386] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.601401] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.604968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.773 [2024-07-25 19:07:10.614464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.614861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.614893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.773 [2024-07-25 19:07:10.614911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.773 [2024-07-25 19:07:10.615165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.773 [2024-07-25 19:07:10.615410] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.773 [2024-07-25 19:07:10.615434] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.773 [2024-07-25 19:07:10.615449] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.773 [2024-07-25 19:07:10.619014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.773 [2024-07-25 19:07:10.628299] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.773 [2024-07-25 19:07:10.628657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.773 [2024-07-25 19:07:10.628688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.774 [2024-07-25 19:07:10.628707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.774 [2024-07-25 19:07:10.628945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.774 [2024-07-25 19:07:10.629208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.774 [2024-07-25 19:07:10.629234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.774 [2024-07-25 19:07:10.629250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.774 [2024-07-25 19:07:10.632815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.774 [2024-07-25 19:07:10.642319] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.774 [2024-07-25 19:07:10.642742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.774 [2024-07-25 19:07:10.642773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:58.774 [2024-07-25 19:07:10.642791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:58.774 [2024-07-25 19:07:10.643030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:58.774 [2024-07-25 19:07:10.643286] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.774 [2024-07-25 19:07:10.643311] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.774 [2024-07-25 19:07:10.643327] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.774 [2024-07-25 19:07:10.646899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.035 [2024-07-25 19:07:10.656163] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.656586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.656618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.656635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.656874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.657133] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.657158] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.657174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.660743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.035 [2024-07-25 19:07:10.670024] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.670437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.670468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.670485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.670724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.670967] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.670991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.671007] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.674590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.035 [2024-07-25 19:07:10.683863] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.684282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.684313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.684331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.684569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.684812] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.684836] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.684852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.688429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.035 [2024-07-25 19:07:10.697697] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.698125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.698152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.698169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.698410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.698677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.698702] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.698718] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.702297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.035 [2024-07-25 19:07:10.711582] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.711992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.712023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.712041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.712291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.712534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.712559] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.712574] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.716149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.035 [2024-07-25 19:07:10.725422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.725820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.725851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.725874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.726126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.726370] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.726394] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.726410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.729976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.035 [2024-07-25 19:07:10.739254] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.739634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.739665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.739683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.739922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.740178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.740203] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.740219] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.743783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.035 [2024-07-25 19:07:10.753262] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.753660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.753691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.035 [2024-07-25 19:07:10.753709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.035 [2024-07-25 19:07:10.753948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.035 [2024-07-25 19:07:10.754203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.035 [2024-07-25 19:07:10.754228] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.035 [2024-07-25 19:07:10.754244] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.035 [2024-07-25 19:07:10.757811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.035 [2024-07-25 19:07:10.767090] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.035 [2024-07-25 19:07:10.767490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-25 19:07:10.767520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.767538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.767777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.768020] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.768049] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.768078] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.771646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.036 [2024-07-25 19:07:10.780917] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.781328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.781359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.781377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.781615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.781859] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.781882] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.781898] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.785472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.036 [2024-07-25 19:07:10.794752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.795133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.795164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.795182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.795440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.795705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.795731] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.795747] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.799506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.036 [2024-07-25 19:07:10.808686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.809089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.809123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.809141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.809381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.809625] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.809649] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.809665] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.813270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.036 [2024-07-25 19:07:10.822598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.823003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.823035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.823054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.823306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.823549] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.823574] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.823589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.827172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.036 [2024-07-25 19:07:10.836460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.836878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.836906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.836922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.837184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.837427] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.837451] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.837467] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.841040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.036 [2024-07-25 19:07:10.850336] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.850807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.850838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.850856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.851106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.851350] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.851374] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.851390] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.854963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.036 [2024-07-25 19:07:10.864269] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.864652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.864683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.864701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.864945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.865202] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.865227] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.865243] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.868816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.036 [2024-07-25 19:07:10.878109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.878519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.878550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.878568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.878807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.879050] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.879086] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.879103] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.882672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.036 [2024-07-25 19:07:10.891954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.892388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.892419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.036 [2024-07-25 19:07:10.892437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.036 [2024-07-25 19:07:10.892676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.036 [2024-07-25 19:07:10.892920] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.036 [2024-07-25 19:07:10.892944] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.036 [2024-07-25 19:07:10.892959] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.036 [2024-07-25 19:07:10.896539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.036 [2024-07-25 19:07:10.905865] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.036 [2024-07-25 19:07:10.906273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-25 19:07:10.906305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.037 [2024-07-25 19:07:10.906323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.037 [2024-07-25 19:07:10.906561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.037 [2024-07-25 19:07:10.906805] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.037 [2024-07-25 19:07:10.906829] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.037 [2024-07-25 19:07:10.906850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.296 [2024-07-25 19:07:10.910439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.296 [2024-07-25 19:07:10.919733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.296 [2024-07-25 19:07:10.920164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.296 [2024-07-25 19:07:10.920196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.296 [2024-07-25 19:07:10.920214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.296 [2024-07-25 19:07:10.920452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.296 [2024-07-25 19:07:10.920695] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.296 [2024-07-25 19:07:10.920719] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.296 [2024-07-25 19:07:10.920735] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.296 [2024-07-25 19:07:10.924325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.296 [2024-07-25 19:07:10.933618] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.296 [2024-07-25 19:07:10.934044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.296 [2024-07-25 19:07:10.934082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.296 [2024-07-25 19:07:10.934101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.296 [2024-07-25 19:07:10.934341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.296 [2024-07-25 19:07:10.934584] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.296 [2024-07-25 19:07:10.934608] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.296 [2024-07-25 19:07:10.934624] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.296 [2024-07-25 19:07:10.938200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.296 [2024-07-25 19:07:10.947486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.296 [2024-07-25 19:07:10.947861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.296 [2024-07-25 19:07:10.947892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.296 [2024-07-25 19:07:10.947909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.296 [2024-07-25 19:07:10.948158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.296 [2024-07-25 19:07:10.948401] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.296 [2024-07-25 19:07:10.948426] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.296 [2024-07-25 19:07:10.948441] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.296 [2024-07-25 19:07:10.952008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.296 [2024-07-25 19:07:10.961508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.296 [2024-07-25 19:07:10.961910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.296 [2024-07-25 19:07:10.961941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.296 [2024-07-25 19:07:10.961959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.296 [2024-07-25 19:07:10.962206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:10.962450] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:10.962474] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:10.962489] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:10.966066] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.297 [2024-07-25 19:07:10.975549] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:10.975923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:10.975953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:10.975971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:10.976223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:10.976466] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:10.976490] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:10.976506] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:10.980087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.297 [2024-07-25 19:07:10.989386] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:10.989775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:10.989806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:10.989825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:10.990074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:10.990319] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:10.990343] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:10.990359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:10.993930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.297 [2024-07-25 19:07:11.003423] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.003824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.003856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.003874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.004123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.004373] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:11.004397] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:11.004413] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:11.007977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.297 [2024-07-25 19:07:11.017462] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.017863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.017894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.017912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.018162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.018406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:11.018431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:11.018447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:11.022018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.297 [2024-07-25 19:07:11.031304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.031792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.031823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.031841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.032091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.032335] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:11.032359] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:11.032375] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:11.035947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.297 [2024-07-25 19:07:11.045238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.045726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.045778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.045797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.046053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.046314] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:11.046346] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:11.046365] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:11.050144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.297 [2024-07-25 19:07:11.059146] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.059609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.059641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.059660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.059899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.060160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:11.060186] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:11.060202] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:11.063777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.297 [2024-07-25 19:07:11.073100] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.073544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.073595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.073613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.073852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.074106] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:11.074131] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:11.074147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:11.077712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.297 [2024-07-25 19:07:11.086987] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.087420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.087451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.087469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.087708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.087952] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.297 [2024-07-25 19:07:11.087976] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.297 [2024-07-25 19:07:11.087991] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.297 [2024-07-25 19:07:11.091564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.297 [2024-07-25 19:07:11.100835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.297 [2024-07-25 19:07:11.101288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.297 [2024-07-25 19:07:11.101339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.297 [2024-07-25 19:07:11.101363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.297 [2024-07-25 19:07:11.101602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.297 [2024-07-25 19:07:11.101845] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.298 [2024-07-25 19:07:11.101869] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.298 [2024-07-25 19:07:11.101885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.298 [2024-07-25 19:07:11.105459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.298 [2024-07-25 19:07:11.114729] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.298 [2024-07-25 19:07:11.115156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.298 [2024-07-25 19:07:11.115188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.298 [2024-07-25 19:07:11.115205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.298 [2024-07-25 19:07:11.115444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.298 [2024-07-25 19:07:11.115687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.298 [2024-07-25 19:07:11.115711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.298 [2024-07-25 19:07:11.115727] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.298 [2024-07-25 19:07:11.119301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.298 [2024-07-25 19:07:11.128563] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.298 [2024-07-25 19:07:11.128995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.298 [2024-07-25 19:07:11.129044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.298 [2024-07-25 19:07:11.129071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.298 [2024-07-25 19:07:11.129313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.298 [2024-07-25 19:07:11.129556] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.298 [2024-07-25 19:07:11.129580] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.298 [2024-07-25 19:07:11.129596] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.298 [2024-07-25 19:07:11.133165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.298 [2024-07-25 19:07:11.142426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.298 [2024-07-25 19:07:11.142824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.298 [2024-07-25 19:07:11.142855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.298 [2024-07-25 19:07:11.142873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.298 [2024-07-25 19:07:11.143122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.298 [2024-07-25 19:07:11.143366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.298 [2024-07-25 19:07:11.143396] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.298 [2024-07-25 19:07:11.143412] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.298 [2024-07-25 19:07:11.146975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.298 [2024-07-25 19:07:11.156454] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.298 [2024-07-25 19:07:11.156829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.298 [2024-07-25 19:07:11.156860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.298 [2024-07-25 19:07:11.156878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.298 [2024-07-25 19:07:11.157128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.298 [2024-07-25 19:07:11.157372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.298 [2024-07-25 19:07:11.157396] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.298 [2024-07-25 19:07:11.157412] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.298 [2024-07-25 19:07:11.160974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.298 [2024-07-25 19:07:11.170448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.298 [2024-07-25 19:07:11.170824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.298 [2024-07-25 19:07:11.170855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.298 [2024-07-25 19:07:11.170872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.298 [2024-07-25 19:07:11.171123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.298 [2024-07-25 19:07:11.171375] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.298 [2024-07-25 19:07:11.171399] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.298 [2024-07-25 19:07:11.171415] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.174976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.557 [2024-07-25 19:07:11.184453] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.184852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.184882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.184900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.185150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.185394] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.185418] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.185433] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.188992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.557 [2024-07-25 19:07:11.198463] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.198938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.198968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.198986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.199236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.199480] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.199504] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.199520] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.203086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.557 [2024-07-25 19:07:11.212347] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.212727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.212758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.212776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.213015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.213269] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.213293] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.213309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.216871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.557 [2024-07-25 19:07:11.226352] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.226849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.226901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.226918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.227168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.227412] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.227436] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.227452] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.231013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.557 [2024-07-25 19:07:11.240266] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.240639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.240670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.240688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.240932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.241189] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.241214] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.241229] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.244926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.557 [2024-07-25 19:07:11.254198] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.254584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.254615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.254633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.254872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.255126] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.255151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.255166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.258731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.557 [2024-07-25 19:07:11.268207] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.268594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.268625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.268643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.268881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.269145] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.269170] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.269185] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.272747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.557 [2024-07-25 19:07:11.282216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.282640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.282693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.282711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.557 [2024-07-25 19:07:11.282949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.557 [2024-07-25 19:07:11.283203] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.557 [2024-07-25 19:07:11.283229] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.557 [2024-07-25 19:07:11.283250] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.557 [2024-07-25 19:07:11.286811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.557 [2024-07-25 19:07:11.296080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.557 [2024-07-25 19:07:11.296472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.557 [2024-07-25 19:07:11.296504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.557 [2024-07-25 19:07:11.296522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.296772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.297033] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.297069] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.297091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.300828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.558 [2024-07-25 19:07:11.310184] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.310593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.310626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.310645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.310884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.311142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.311167] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.311183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.314764] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.558 [2024-07-25 19:07:11.324077] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.324506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.324538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.324556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.324795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.325039] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.325072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.325090] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.328654] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.558 [2024-07-25 19:07:11.337917] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.338362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.338417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.338435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.338673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.338917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.338941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.338957] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.342528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.558 [2024-07-25 19:07:11.351789] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.352267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.352298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.352315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.352553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.352797] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.352821] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.352837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.356413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.558 [2024-07-25 19:07:11.365673] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.366126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.366158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.366176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.366415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.366659] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.366682] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.366698] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.370286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.558 [2024-07-25 19:07:11.379565] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.380057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.380114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.380133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.380371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.380620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.380645] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.380660] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.384233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.558 [2024-07-25 19:07:11.393490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.393942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.393995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.394013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.394261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.394505] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.394529] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.394545] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.398126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.558 [2024-07-25 19:07:11.407384] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.407776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.407806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.407824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.408072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.408315] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.408339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.408355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.411917] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.558 [2024-07-25 19:07:11.421393] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.558 [2024-07-25 19:07:11.421769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.558 [2024-07-25 19:07:11.421800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.558 [2024-07-25 19:07:11.421817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.558 [2024-07-25 19:07:11.422056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.558 [2024-07-25 19:07:11.422309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.558 [2024-07-25 19:07:11.422333] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.558 [2024-07-25 19:07:11.422349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.558 [2024-07-25 19:07:11.425917] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.818 [2024-07-25 19:07:11.435397] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.818 [2024-07-25 19:07:11.435803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.818 [2024-07-25 19:07:11.435834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.818 [2024-07-25 19:07:11.435852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.818 [2024-07-25 19:07:11.436102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.818 [2024-07-25 19:07:11.436346] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.818 [2024-07-25 19:07:11.436370] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.818 [2024-07-25 19:07:11.436386] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.818 [2024-07-25 19:07:11.439953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.818 [2024-07-25 19:07:11.449431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.818 [2024-07-25 19:07:11.449827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.818 [2024-07-25 19:07:11.449858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.818 [2024-07-25 19:07:11.449876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.818 [2024-07-25 19:07:11.450126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.818 [2024-07-25 19:07:11.450371] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.818 [2024-07-25 19:07:11.450395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.818 [2024-07-25 19:07:11.450411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.818 [2024-07-25 19:07:11.453973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.818 [2024-07-25 19:07:11.463456] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.818 [2024-07-25 19:07:11.463853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.818 [2024-07-25 19:07:11.463884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.818 [2024-07-25 19:07:11.463901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.818 [2024-07-25 19:07:11.464152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.818 [2024-07-25 19:07:11.464396] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.818 [2024-07-25 19:07:11.464420] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.818 [2024-07-25 19:07:11.464436] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.818 [2024-07-25 19:07:11.468001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.818 [2024-07-25 19:07:11.477475] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.818 [2024-07-25 19:07:11.477963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.818 [2024-07-25 19:07:11.478015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.818 [2024-07-25 19:07:11.478039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.818 [2024-07-25 19:07:11.478287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.818 [2024-07-25 19:07:11.478531] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.818 [2024-07-25 19:07:11.478555] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.818 [2024-07-25 19:07:11.478571] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.818 [2024-07-25 19:07:11.482141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.818 [2024-07-25 19:07:11.491393] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.818 [2024-07-25 19:07:11.491765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.818 [2024-07-25 19:07:11.491796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.818 [2024-07-25 19:07:11.491814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.818 [2024-07-25 19:07:11.492053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.818 [2024-07-25 19:07:11.492308] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.818 [2024-07-25 19:07:11.492332] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.818 [2024-07-25 19:07:11.492348] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.818 [2024-07-25 19:07:11.495909] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.818 [2024-07-25 19:07:11.505381] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.818 [2024-07-25 19:07:11.505839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.818 [2024-07-25 19:07:11.505870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.818 [2024-07-25 19:07:11.505888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.818 [2024-07-25 19:07:11.506138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.818 [2024-07-25 19:07:11.506382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.506405] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.506421] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.509982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.819 [2024-07-25 19:07:11.519245] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.519645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.519677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.519694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.519933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.520187] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.520220] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.520237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.523804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.819 [2024-07-25 19:07:11.533070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.533462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.533494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.533512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.533750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.533993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.534017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.534033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.537607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.819 [2024-07-25 19:07:11.547082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.547487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.547519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.547537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.547786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.548039] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.548083] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.548106] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.551840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.819 [2024-07-25 19:07:11.561007] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.561432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.561464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.561483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.561722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.561966] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.561990] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.562006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.565595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.819 [2024-07-25 19:07:11.574898] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.575293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.575325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.575343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.575582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.575825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.575849] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.575865] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.579443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.819 [2024-07-25 19:07:11.588916] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.589328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.589359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.589377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.589616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.589860] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.589884] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.589900] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.593473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.819 [2024-07-25 19:07:11.602937] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.603333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.603364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.603382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.603622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.603865] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.603889] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.603905] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.607478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.819 [2024-07-25 19:07:11.616952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.617363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.617395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.617413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.617657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.617900] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.617924] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.617940] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.621517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.819 [2024-07-25 19:07:11.630775] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.631135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.631166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.631184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.631432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.819 [2024-07-25 19:07:11.631675] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.819 [2024-07-25 19:07:11.631699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.819 [2024-07-25 19:07:11.631714] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.819 [2024-07-25 19:07:11.635286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.819 [2024-07-25 19:07:11.644788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.819 [2024-07-25 19:07:11.645173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.819 [2024-07-25 19:07:11.645204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.819 [2024-07-25 19:07:11.645222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.819 [2024-07-25 19:07:11.645462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.820 [2024-07-25 19:07:11.645705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.820 [2024-07-25 19:07:11.645729] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.820 [2024-07-25 19:07:11.645745] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.820 [2024-07-25 19:07:11.649325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.820 [2024-07-25 19:07:11.658794] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.820 [2024-07-25 19:07:11.659175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.820 [2024-07-25 19:07:11.659206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.820 [2024-07-25 19:07:11.659224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.820 [2024-07-25 19:07:11.659463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.820 [2024-07-25 19:07:11.659706] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.820 [2024-07-25 19:07:11.659730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.820 [2024-07-25 19:07:11.659751] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.820 [2024-07-25 19:07:11.663332] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.820 [2024-07-25 19:07:11.672789] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.820 [2024-07-25 19:07:11.673196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.820 [2024-07-25 19:07:11.673227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.820 [2024-07-25 19:07:11.673245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.820 [2024-07-25 19:07:11.673483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.820 [2024-07-25 19:07:11.673726] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.820 [2024-07-25 19:07:11.673750] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.820 [2024-07-25 19:07:11.673766] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.820 [2024-07-25 19:07:11.677338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.820 [2024-07-25 19:07:11.686818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.820 [2024-07-25 19:07:11.687227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.820 [2024-07-25 19:07:11.687258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:33:59.820 [2024-07-25 19:07:11.687276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:33:59.820 [2024-07-25 19:07:11.687515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:33:59.820 [2024-07-25 19:07:11.687757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.820 [2024-07-25 19:07:11.687781] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.820 [2024-07-25 19:07:11.687797] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.820 [2024-07-25 19:07:11.691370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.079 [2024-07-25 19:07:11.700649] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.079 [2024-07-25 19:07:11.701049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.079 [2024-07-25 19:07:11.701086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.079 [2024-07-25 19:07:11.701105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.079 [2024-07-25 19:07:11.701343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.079 [2024-07-25 19:07:11.701587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.079 [2024-07-25 19:07:11.701611] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.079 [2024-07-25 19:07:11.701627] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.079 [2024-07-25 19:07:11.705201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.079 [2024-07-25 19:07:11.714678] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.079 [2024-07-25 19:07:11.715094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.079 [2024-07-25 19:07:11.715125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.079 [2024-07-25 19:07:11.715143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.079 [2024-07-25 19:07:11.715381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.079 [2024-07-25 19:07:11.715625] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.079 [2024-07-25 19:07:11.715649] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.079 [2024-07-25 19:07:11.715665] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.079 [2024-07-25 19:07:11.719240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.079 [2024-07-25 19:07:11.728707] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.729117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.729149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.729166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.729405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.729648] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.729672] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.729688] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.733262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.080 [2024-07-25 19:07:11.742730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.743112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.743143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.743161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.743400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.743643] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.743666] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.743682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.747260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.080 [2024-07-25 19:07:11.756718] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.757091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.757123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.757141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.757380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.757629] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.757653] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.757668] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.761245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.080 [2024-07-25 19:07:11.770719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.771095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.771126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.771144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.771383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.771626] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.771650] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.771665] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.775240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.080 [2024-07-25 19:07:11.784709] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.785105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.785136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.785154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.785393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.785636] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.785659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.785676] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.789247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.080 [2024-07-25 19:07:11.798701] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.799132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.799165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.799183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.799433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.799687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.799717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.799741] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.803491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.080 [2024-07-25 19:07:11.812659] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.813042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.813083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.813103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.813342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.813586] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.813610] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.813626] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.817216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.080 [2024-07-25 19:07:11.826486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.826886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.826917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.826935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.827187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.827431] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.827455] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.827471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.831034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.080 [2024-07-25 19:07:11.840503] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.840883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.840914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.840932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.841182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.841425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.841449] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.841465] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.845028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.080 [2024-07-25 19:07:11.854500] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.080 [2024-07-25 19:07:11.854873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.080 [2024-07-25 19:07:11.854904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.080 [2024-07-25 19:07:11.854927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.080 [2024-07-25 19:07:11.855178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.080 [2024-07-25 19:07:11.855421] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.080 [2024-07-25 19:07:11.855445] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.080 [2024-07-25 19:07:11.855460] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.080 [2024-07-25 19:07:11.859025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.080 [2024-07-25 19:07:11.868503] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.081 [2024-07-25 19:07:11.868902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.081 [2024-07-25 19:07:11.868933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.081 [2024-07-25 19:07:11.868951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.081 [2024-07-25 19:07:11.869201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.081 [2024-07-25 19:07:11.869445] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.081 [2024-07-25 19:07:11.869469] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.081 [2024-07-25 19:07:11.869484] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.081 [2024-07-25 19:07:11.873045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.081 [2024-07-25 19:07:11.882505] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.081 [2024-07-25 19:07:11.882917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.081 [2024-07-25 19:07:11.882948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.081 [2024-07-25 19:07:11.882965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.081 [2024-07-25 19:07:11.883214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.081 [2024-07-25 19:07:11.883458] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.081 [2024-07-25 19:07:11.883482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.081 [2024-07-25 19:07:11.883498] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.081 [2024-07-25 19:07:11.887065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.081 [2024-07-25 19:07:11.896535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.081 [2024-07-25 19:07:11.896915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.081 [2024-07-25 19:07:11.896946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.081 [2024-07-25 19:07:11.896964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.081 [2024-07-25 19:07:11.897214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.081 [2024-07-25 19:07:11.897458] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.081 [2024-07-25 19:07:11.897488] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.081 [2024-07-25 19:07:11.897505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.081 [2024-07-25 19:07:11.901083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.081 [2024-07-25 19:07:11.910551] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.081 [2024-07-25 19:07:11.910924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.081 [2024-07-25 19:07:11.910954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.081 [2024-07-25 19:07:11.910972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.081 [2024-07-25 19:07:11.911222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.081 [2024-07-25 19:07:11.911465] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.081 [2024-07-25 19:07:11.911489] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.081 [2024-07-25 19:07:11.911505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.081 [2024-07-25 19:07:11.915074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.081 [2024-07-25 19:07:11.924547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.081 [2024-07-25 19:07:11.924926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.081 [2024-07-25 19:07:11.924956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.081 [2024-07-25 19:07:11.924974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.081 [2024-07-25 19:07:11.925225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.081 [2024-07-25 19:07:11.925469] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.081 [2024-07-25 19:07:11.925493] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.081 [2024-07-25 19:07:11.925508] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.081 [2024-07-25 19:07:11.929077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.081 [2024-07-25 19:07:11.938543] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.081 [2024-07-25 19:07:11.938926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.081 [2024-07-25 19:07:11.938956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.081 [2024-07-25 19:07:11.938974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.081 [2024-07-25 19:07:11.939223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.081 [2024-07-25 19:07:11.939467] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.081 [2024-07-25 19:07:11.939491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.081 [2024-07-25 19:07:11.939507] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.081 [2024-07-25 19:07:11.943076] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.081 [2024-07-25 19:07:11.952381] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.081 [2024-07-25 19:07:11.952794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.081 [2024-07-25 19:07:11.952825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.081 [2024-07-25 19:07:11.952843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.081 [2024-07-25 19:07:11.953091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.081 [2024-07-25 19:07:11.953335] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.081 [2024-07-25 19:07:11.953366] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.081 [2024-07-25 19:07:11.953381] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:11.956946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.342 [2024-07-25 19:07:11.966432] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:11.966808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:11.966839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.342 [2024-07-25 19:07:11.966856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.342 [2024-07-25 19:07:11.967105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.342 [2024-07-25 19:07:11.967349] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.342 [2024-07-25 19:07:11.967375] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.342 [2024-07-25 19:07:11.967390] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:11.970954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.342 [2024-07-25 19:07:11.980471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:11.980855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:11.980886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.342 [2024-07-25 19:07:11.980905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.342 [2024-07-25 19:07:11.981154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.342 [2024-07-25 19:07:11.981398] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.342 [2024-07-25 19:07:11.981422] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.342 [2024-07-25 19:07:11.981438] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:11.985008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.342 [2024-07-25 19:07:11.994501] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:11.994888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:11.994920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.342 [2024-07-25 19:07:11.994938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.342 [2024-07-25 19:07:11.995193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.342 [2024-07-25 19:07:11.995437] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.342 [2024-07-25 19:07:11.995461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.342 [2024-07-25 19:07:11.995477] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:11.999041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.342 [2024-07-25 19:07:12.008542] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:12.008934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:12.008965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.342 [2024-07-25 19:07:12.008982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.342 [2024-07-25 19:07:12.009230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.342 [2024-07-25 19:07:12.009474] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.342 [2024-07-25 19:07:12.009498] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.342 [2024-07-25 19:07:12.009514] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:12.013090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.342 [2024-07-25 19:07:12.022386] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:12.022829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:12.022878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.342 [2024-07-25 19:07:12.022897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.342 [2024-07-25 19:07:12.023146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.342 [2024-07-25 19:07:12.023390] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.342 [2024-07-25 19:07:12.023414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.342 [2024-07-25 19:07:12.023429] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:12.026999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.342 [2024-07-25 19:07:12.036306] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:12.036787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:12.036817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.342 [2024-07-25 19:07:12.036835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.342 [2024-07-25 19:07:12.037083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.342 [2024-07-25 19:07:12.037326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.342 [2024-07-25 19:07:12.037361] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.342 [2024-07-25 19:07:12.037382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:12.040950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.342 [2024-07-25 19:07:12.050223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:12.050630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:12.050662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.342 [2024-07-25 19:07:12.050680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.342 [2024-07-25 19:07:12.050931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.342 [2024-07-25 19:07:12.051205] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.342 [2024-07-25 19:07:12.051231] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.342 [2024-07-25 19:07:12.051247] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.342 [2024-07-25 19:07:12.054992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.342 [2024-07-25 19:07:12.064182] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.342 [2024-07-25 19:07:12.064628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.342 [2024-07-25 19:07:12.064659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.064679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.064918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.065176] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.065202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.065217] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.068805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.343 [2024-07-25 19:07:12.078114] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.078522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.078554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.078572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.078811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.079054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.079090] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.079107] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.082672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.343 [2024-07-25 19:07:12.091937] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.092363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.092395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.092413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.092652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.092895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.092919] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.092936] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.096507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.343 [2024-07-25 19:07:12.105769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.106163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.106199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.106234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.106474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.106718] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.106742] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.106758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.110328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.343 [2024-07-25 19:07:12.119803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.120168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.120198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.120216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.120455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.120698] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.120722] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.120738] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.124329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.343 [2024-07-25 19:07:12.133842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.134251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.134282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.134300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.134539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.134788] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.134812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.134828] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.138408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.343 [2024-07-25 19:07:12.147721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.148108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.148140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.148158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.148397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.148641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.148665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.148681] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.152257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.343 [2024-07-25 19:07:12.161730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.162131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.162163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.162181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.162420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.162663] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.162688] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.162704] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.166274] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.343 [2024-07-25 19:07:12.175735] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.176113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.176144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.176163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.176402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.176645] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.176669] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.176685] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.180267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.343 [2024-07-25 19:07:12.189754] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.190154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.190185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.190203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.343 [2024-07-25 19:07:12.190442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.343 [2024-07-25 19:07:12.190685] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.343 [2024-07-25 19:07:12.190709] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.343 [2024-07-25 19:07:12.190724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.343 [2024-07-25 19:07:12.194307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.343 [2024-07-25 19:07:12.203606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.343 [2024-07-25 19:07:12.203987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.343 [2024-07-25 19:07:12.204019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.343 [2024-07-25 19:07:12.204037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.344 [2024-07-25 19:07:12.204285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.344 [2024-07-25 19:07:12.204528] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.344 [2024-07-25 19:07:12.204553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.344 [2024-07-25 19:07:12.204569] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.344 [2024-07-25 19:07:12.208141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.344 [2024-07-25 19:07:12.217448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.344 [2024-07-25 19:07:12.217846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.344 [2024-07-25 19:07:12.217878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.344 [2024-07-25 19:07:12.217896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.344 [2024-07-25 19:07:12.218146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.605 [2024-07-25 19:07:12.218390] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.605 [2024-07-25 19:07:12.218416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.605 [2024-07-25 19:07:12.218432] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.605 [2024-07-25 19:07:12.222007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.605 [2024-07-25 19:07:12.231295] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.605 [2024-07-25 19:07:12.231707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.605 [2024-07-25 19:07:12.231738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.605 [2024-07-25 19:07:12.231765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.605 [2024-07-25 19:07:12.232005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.605 [2024-07-25 19:07:12.232262] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.605 [2024-07-25 19:07:12.232287] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.605 [2024-07-25 19:07:12.232303] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.605 [2024-07-25 19:07:12.235869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.605 [2024-07-25 19:07:12.245135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.605 [2024-07-25 19:07:12.245532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.605 [2024-07-25 19:07:12.245563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.605 [2024-07-25 19:07:12.245581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.605 [2024-07-25 19:07:12.245820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.605 [2024-07-25 19:07:12.246074] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.605 [2024-07-25 19:07:12.246099] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.605 [2024-07-25 19:07:12.246115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.605 [2024-07-25 19:07:12.249677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.605 [2024-07-25 19:07:12.259164] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.605 [2024-07-25 19:07:12.259591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.605 [2024-07-25 19:07:12.259622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.605 [2024-07-25 19:07:12.259640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.605 [2024-07-25 19:07:12.259879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.605 [2024-07-25 19:07:12.260135] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.605 [2024-07-25 19:07:12.260160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.605 [2024-07-25 19:07:12.260175] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.605 [2024-07-25 19:07:12.263743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.605 [2024-07-25 19:07:12.273089] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.605 [2024-07-25 19:07:12.273494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.605 [2024-07-25 19:07:12.273525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.605 [2024-07-25 19:07:12.273543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.605 [2024-07-25 19:07:12.273793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.274036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.274078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.274096] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.277665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.606 [2024-07-25 19:07:12.286933] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.287340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.287371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.287389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.287628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.287871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.287895] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.287910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.291486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.606 [2024-07-25 19:07:12.300954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.301438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.301489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.301507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.301756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.302010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.302043] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.302074] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.305810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.606 [2024-07-25 19:07:12.314985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.315410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.315463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.315482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.315720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.315964] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.315988] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.316004] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.319599] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.606 [2024-07-25 19:07:12.328976] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.329492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.329544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.329562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.329801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.330044] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.330080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.330098] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.333679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.606 [2024-07-25 19:07:12.342944] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.343328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.343359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.343377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.343616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.343860] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.343884] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.343900] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.347478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.606 [2024-07-25 19:07:12.356953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.357417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.357469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.357487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.357725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.357968] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.357992] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.358008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.361586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.606 [2024-07-25 19:07:12.370855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.371242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.371273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.371291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.371536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.371780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.371804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.371819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.375403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.606 [2024-07-25 19:07:12.384881] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.385265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.385296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.385314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.385553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.385796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.385820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.385836] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.389413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.606 [2024-07-25 19:07:12.398907] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.399318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.399349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.399367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.399605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.399849] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.606 [2024-07-25 19:07:12.399873] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.606 [2024-07-25 19:07:12.399889] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.606 [2024-07-25 19:07:12.403471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.606 [2024-07-25 19:07:12.412741] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.606 [2024-07-25 19:07:12.413156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.606 [2024-07-25 19:07:12.413188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.606 [2024-07-25 19:07:12.413206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.606 [2024-07-25 19:07:12.413444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.606 [2024-07-25 19:07:12.413687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.607 [2024-07-25 19:07:12.413711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.607 [2024-07-25 19:07:12.413732] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.607 [2024-07-25 19:07:12.417317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.607 [2024-07-25 19:07:12.426589] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.607 [2024-07-25 19:07:12.426998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.607 [2024-07-25 19:07:12.427029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.607 [2024-07-25 19:07:12.427047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.607 [2024-07-25 19:07:12.427297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.607 [2024-07-25 19:07:12.427540] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.607 [2024-07-25 19:07:12.427564] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.607 [2024-07-25 19:07:12.427580] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.607 [2024-07-25 19:07:12.431154] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.607 [2024-07-25 19:07:12.440424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.607 [2024-07-25 19:07:12.440824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.607 [2024-07-25 19:07:12.440855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.607 [2024-07-25 19:07:12.440873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.607 [2024-07-25 19:07:12.441124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.607 [2024-07-25 19:07:12.441368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.607 [2024-07-25 19:07:12.441392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.607 [2024-07-25 19:07:12.441408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.607 [2024-07-25 19:07:12.444971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.607 [2024-07-25 19:07:12.454448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.607 [2024-07-25 19:07:12.454847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.607 [2024-07-25 19:07:12.454878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.607 [2024-07-25 19:07:12.454895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.607 [2024-07-25 19:07:12.455146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.607 [2024-07-25 19:07:12.455391] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.607 [2024-07-25 19:07:12.455415] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.607 [2024-07-25 19:07:12.455431] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.607 [2024-07-25 19:07:12.458995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.607 [2024-07-25 19:07:12.468464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.607 [2024-07-25 19:07:12.468861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.607 [2024-07-25 19:07:12.468892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.607 [2024-07-25 19:07:12.468911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.607 [2024-07-25 19:07:12.469161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.607 [2024-07-25 19:07:12.469405] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.607 [2024-07-25 19:07:12.469429] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.607 [2024-07-25 19:07:12.469445] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.607 [2024-07-25 19:07:12.473011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.869 [2024-07-25 19:07:12.482508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.482891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.482922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.482940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.483191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.483434] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.483458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.483473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.487045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.869 [2024-07-25 19:07:12.496537] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.496910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.496941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.496959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.497209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.497453] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.497477] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.497493] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.501063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.869 [2024-07-25 19:07:12.510533] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.510941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.510973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.510991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.511247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.511491] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.511515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.511531] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.515104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.869 [2024-07-25 19:07:12.524373] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.524770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.524801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.524818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.525057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.525312] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.525336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.525352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.528919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.869 [2024-07-25 19:07:12.538399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.538775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.538806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.538824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.539074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.539318] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.539342] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.539358] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.542925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.869 [2024-07-25 19:07:12.552406] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.552924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.552979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.552997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.553258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.553514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.553543] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.553567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.557312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.869 [2024-07-25 19:07:12.566276] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.566784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.566837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.566855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.567113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.567358] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.567382] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.567398] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.570976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.869 [2024-07-25 19:07:12.580254] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.580746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.580776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.580794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.581033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.581287] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.581312] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.581328] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.584892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.869 [2024-07-25 19:07:12.594180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.594557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.594588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.594605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.594844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.595100] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.869 [2024-07-25 19:07:12.595125] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.869 [2024-07-25 19:07:12.595141] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.869 [2024-07-25 19:07:12.598708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.869 [2024-07-25 19:07:12.608194] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.869 [2024-07-25 19:07:12.608605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.869 [2024-07-25 19:07:12.608641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.869 [2024-07-25 19:07:12.608660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.869 [2024-07-25 19:07:12.608899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.869 [2024-07-25 19:07:12.609154] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.609179] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.609195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.612758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.870 [2024-07-25 19:07:12.622036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.622425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.622457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.622476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.622715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.622959] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.622983] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.622999] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.626580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.870 [2024-07-25 19:07:12.636072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.636471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.636503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.636521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.636759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.637003] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.637027] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.637043] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.640619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.870 [2024-07-25 19:07:12.650117] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.650473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.650505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.650522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.650761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.651010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.651034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.651050] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.654633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.870 [2024-07-25 19:07:12.664114] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.664521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.664552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.664570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.664809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.665052] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.665090] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.665106] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.668675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.870 [2024-07-25 19:07:12.677947] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.678385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.678416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.678434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.678674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.678917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.678941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.678957] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.682530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.870 [2024-07-25 19:07:12.691795] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.692176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.692207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.692224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.692463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.692706] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.692730] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.692746] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.696324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.870 [2024-07-25 19:07:12.705805] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.706192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.706222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.706240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.706478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.706721] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.706745] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.706761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.710335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.870 [2024-07-25 19:07:12.719812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.720203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.720233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.720251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.720490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.720733] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.720757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.720773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.724356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.870 [2024-07-25 19:07:12.733832] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.870 [2024-07-25 19:07:12.734240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.870 [2024-07-25 19:07:12.734272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:00.870 [2024-07-25 19:07:12.734290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:00.870 [2024-07-25 19:07:12.734529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:00.870 [2024-07-25 19:07:12.734772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.870 [2024-07-25 19:07:12.734796] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.870 [2024-07-25 19:07:12.734812] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.870 [2024-07-25 19:07:12.738388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.132 [2024-07-25 19:07:12.747667] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.132 [2024-07-25 19:07:12.748044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.132 [2024-07-25 19:07:12.748081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.132 [2024-07-25 19:07:12.748105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.132 [2024-07-25 19:07:12.748345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.132 [2024-07-25 19:07:12.748588] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.132 [2024-07-25 19:07:12.748612] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.132 [2024-07-25 19:07:12.748628] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.132 [2024-07-25 19:07:12.752211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.132 [2024-07-25 19:07:12.761694] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.132 [2024-07-25 19:07:12.762073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.132 [2024-07-25 19:07:12.762103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.132 [2024-07-25 19:07:12.762121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.132 [2024-07-25 19:07:12.762360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.132 [2024-07-25 19:07:12.762603] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.132 [2024-07-25 19:07:12.762627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.132 [2024-07-25 19:07:12.762643] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.132 [2024-07-25 19:07:12.766220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.132 [2024-07-25 19:07:12.775698] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.132 [2024-07-25 19:07:12.776095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.132 [2024-07-25 19:07:12.776126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.132 [2024-07-25 19:07:12.776144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.132 [2024-07-25 19:07:12.776383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.132 [2024-07-25 19:07:12.776626] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.132 [2024-07-25 19:07:12.776650] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.132 [2024-07-25 19:07:12.776666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.132 [2024-07-25 19:07:12.780268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.132 [2024-07-25 19:07:12.789548] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.132 [2024-07-25 19:07:12.790009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.132 [2024-07-25 19:07:12.790067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.132 [2024-07-25 19:07:12.790088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.132 [2024-07-25 19:07:12.790327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.132 [2024-07-25 19:07:12.790570] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.132 [2024-07-25 19:07:12.790594] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.132 [2024-07-25 19:07:12.790615] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.132 [2024-07-25 19:07:12.794191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.132 [2024-07-25 19:07:12.803458] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.132 [2024-07-25 19:07:12.803885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.132 [2024-07-25 19:07:12.803918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.132 [2024-07-25 19:07:12.803936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.132 [2024-07-25 19:07:12.804201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.132 [2024-07-25 19:07:12.804457] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.132 [2024-07-25 19:07:12.804488] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.132 [2024-07-25 19:07:12.804510] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.132 [2024-07-25 19:07:12.808251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.132 [2024-07-25 19:07:12.817429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.132 [2024-07-25 19:07:12.817813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.132 [2024-07-25 19:07:12.817845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.132 [2024-07-25 19:07:12.817863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.132 [2024-07-25 19:07:12.818120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.132 [2024-07-25 19:07:12.818364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.132 [2024-07-25 19:07:12.818388] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.132 [2024-07-25 19:07:12.818404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.132 [2024-07-25 19:07:12.821987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.132 [2024-07-25 19:07:12.831281] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.132 [2024-07-25 19:07:12.831660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.132 [2024-07-25 19:07:12.831692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.132 [2024-07-25 19:07:12.831711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.132 [2024-07-25 19:07:12.831950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.132 [2024-07-25 19:07:12.832210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.832236] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.832252] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.835814] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.133 [2024-07-25 19:07:12.845288] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.845716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.845748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.845766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.846005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.846259] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.846284] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.846300] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.849868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.133 [2024-07-25 19:07:12.859139] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.859539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.859570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.859588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.859827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.860082] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.860107] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.860123] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.863689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.133 [2024-07-25 19:07:12.872962] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.873374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.873406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.873424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.873663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.873907] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.873931] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.873946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.877524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.133 [2024-07-25 19:07:12.886799] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.887205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.887237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.887255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.887500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.887743] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.887768] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.887783] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.891359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.133 [2024-07-25 19:07:12.900835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.901243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.901274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.901291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.901530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.901773] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.901797] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.901813] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.905390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.133 [2024-07-25 19:07:12.914862] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.915268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.915299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.915317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.915555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.915798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.915822] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.915838] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.919418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.133 [2024-07-25 19:07:12.928693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.929107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.929138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.929155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.929394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.929638] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.929661] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.929685] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.933264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.133 [2024-07-25 19:07:12.942528] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.942955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.942986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.943004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.943255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.943499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.943524] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.943540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.947118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.133 [2024-07-25 19:07:12.956391] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.956798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.956828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.956846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.957097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.957341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.957365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.133 [2024-07-25 19:07:12.957381] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.133 [2024-07-25 19:07:12.960944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.133 [2024-07-25 19:07:12.970219] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.133 [2024-07-25 19:07:12.970592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.133 [2024-07-25 19:07:12.970623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.133 [2024-07-25 19:07:12.970641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.133 [2024-07-25 19:07:12.970879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.133 [2024-07-25 19:07:12.971135] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.133 [2024-07-25 19:07:12.971160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.134 [2024-07-25 19:07:12.971175] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.134 [2024-07-25 19:07:12.974739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.134 [2024-07-25 19:07:12.984223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.134 [2024-07-25 19:07:12.984621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.134 [2024-07-25 19:07:12.984656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.134 [2024-07-25 19:07:12.984674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.134 [2024-07-25 19:07:12.984913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.134 [2024-07-25 19:07:12.985168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.134 [2024-07-25 19:07:12.985192] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.134 [2024-07-25 19:07:12.985208] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.134 [2024-07-25 19:07:12.988772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.134 [2024-07-25 19:07:12.998041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.134 [2024-07-25 19:07:12.998451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.134 [2024-07-25 19:07:12.998482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.134 [2024-07-25 19:07:12.998500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.134 [2024-07-25 19:07:12.998739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.134 [2024-07-25 19:07:12.998982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.134 [2024-07-25 19:07:12.999006] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.134 [2024-07-25 19:07:12.999022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.134 [2024-07-25 19:07:13.002599] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.394 [2024-07-25 19:07:13.011876] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.012269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.394 [2024-07-25 19:07:13.012299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.394 [2024-07-25 19:07:13.012317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.394 [2024-07-25 19:07:13.012556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.394 [2024-07-25 19:07:13.012799] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.394 [2024-07-25 19:07:13.012823] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.394 [2024-07-25 19:07:13.012839] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.394 [2024-07-25 19:07:13.016424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.394 [2024-07-25 19:07:13.025901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.026285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.394 [2024-07-25 19:07:13.026316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.394 [2024-07-25 19:07:13.026334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.394 [2024-07-25 19:07:13.026573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.394 [2024-07-25 19:07:13.026821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.394 [2024-07-25 19:07:13.026845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.394 [2024-07-25 19:07:13.026860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.394 [2024-07-25 19:07:13.030445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.394 [2024-07-25 19:07:13.039925] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.040311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.394 [2024-07-25 19:07:13.040343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.394 [2024-07-25 19:07:13.040360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.394 [2024-07-25 19:07:13.040599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.394 [2024-07-25 19:07:13.040842] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.394 [2024-07-25 19:07:13.040866] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.394 [2024-07-25 19:07:13.040882] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.394 [2024-07-25 19:07:13.044461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.394 [2024-07-25 19:07:13.053942] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.054388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.394 [2024-07-25 19:07:13.054421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.394 [2024-07-25 19:07:13.054448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.394 [2024-07-25 19:07:13.054695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.394 [2024-07-25 19:07:13.054954] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.394 [2024-07-25 19:07:13.054980] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.394 [2024-07-25 19:07:13.054996] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.394 [2024-07-25 19:07:13.058749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.394 [2024-07-25 19:07:13.067939] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.068315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.394 [2024-07-25 19:07:13.068349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.394 [2024-07-25 19:07:13.068369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.394 [2024-07-25 19:07:13.068609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.394 [2024-07-25 19:07:13.068853] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.394 [2024-07-25 19:07:13.068878] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.394 [2024-07-25 19:07:13.068895] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.394 [2024-07-25 19:07:13.072502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.394 [2024-07-25 19:07:13.081838] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.082208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.394 [2024-07-25 19:07:13.082241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.394 [2024-07-25 19:07:13.082259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.394 [2024-07-25 19:07:13.082498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.394 [2024-07-25 19:07:13.082741] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.394 [2024-07-25 19:07:13.082765] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.394 [2024-07-25 19:07:13.082781] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.394 [2024-07-25 19:07:13.086366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.394 [2024-07-25 19:07:13.095870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.096259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.394 [2024-07-25 19:07:13.096291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.394 [2024-07-25 19:07:13.096309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.394 [2024-07-25 19:07:13.096547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.394 [2024-07-25 19:07:13.096791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.394 [2024-07-25 19:07:13.096815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.394 [2024-07-25 19:07:13.096833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.394 [2024-07-25 19:07:13.100413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.394 [2024-07-25 19:07:13.109722] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.394 [2024-07-25 19:07:13.110134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.110165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.110184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.110422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.110666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.110690] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.110706] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.114288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.395 [2024-07-25 19:07:13.123597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.123970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.124001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.124034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.124283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.124528] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.124552] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.124568] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.128146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.395 [2024-07-25 19:07:13.137432] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.137817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.137848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.137866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.138116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.138359] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.138384] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.138400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.141969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.395 [2024-07-25 19:07:13.151476] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.151827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.151857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.151875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.152124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.152368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.152392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.152408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.155976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.395 [2024-07-25 19:07:13.165471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.165846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.165877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.165895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.166144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.166388] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.166417] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.166434] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.170015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.395 [2024-07-25 19:07:13.179522] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.179918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.179949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.179966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.180216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.180459] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.180484] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.180499] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.184080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.395 [2024-07-25 19:07:13.193568] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.193981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.194012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.194030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.194279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.194523] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.194547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.194563] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.198156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.395 [2024-07-25 19:07:13.207437] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.207826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.207856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.207874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.208124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.208368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.208393] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.208408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.211991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.395 [2024-07-25 19:07:13.221288] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.221669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.221700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.221719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.221958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.222213] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.222238] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.222254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.225819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.395 [2024-07-25 19:07:13.235228] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.235608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.235639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.395 [2024-07-25 19:07:13.235657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.395 [2024-07-25 19:07:13.235897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.395 [2024-07-25 19:07:13.236153] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.395 [2024-07-25 19:07:13.236178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.395 [2024-07-25 19:07:13.236194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.395 [2024-07-25 19:07:13.239760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.395 [2024-07-25 19:07:13.249250] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.395 [2024-07-25 19:07:13.249651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.395 [2024-07-25 19:07:13.249683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.396 [2024-07-25 19:07:13.249701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.396 [2024-07-25 19:07:13.249939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.396 [2024-07-25 19:07:13.250195] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.396 [2024-07-25 19:07:13.250220] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.396 [2024-07-25 19:07:13.250236] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.396 [2024-07-25 19:07:13.253801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.396 [2024-07-25 19:07:13.263086] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.396 [2024-07-25 19:07:13.263460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.396 [2024-07-25 19:07:13.263491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.396 [2024-07-25 19:07:13.263509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.396 [2024-07-25 19:07:13.263753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.396 [2024-07-25 19:07:13.263997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.396 [2024-07-25 19:07:13.264021] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.396 [2024-07-25 19:07:13.264037] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.396 [2024-07-25 19:07:13.267617] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.658 [2024-07-25 19:07:13.276923] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.658 [2024-07-25 19:07:13.277302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.658 [2024-07-25 19:07:13.277337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.658 [2024-07-25 19:07:13.277355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.658 [2024-07-25 19:07:13.277593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.658 [2024-07-25 19:07:13.277837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.658 [2024-07-25 19:07:13.277861] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.658 [2024-07-25 19:07:13.277876] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3687912 Killed "${NVMF_APP[@]}" "$@" 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.658 [2024-07-25 19:07:13.281461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3688987 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3688987 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3688987 ']' 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:01.658 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.658 [2024-07-25 19:07:13.290952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.658 [2024-07-25 19:07:13.291338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.658 [2024-07-25 19:07:13.291367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.658 [2024-07-25 19:07:13.291385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.658 [2024-07-25 19:07:13.291624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.658 [2024-07-25 19:07:13.291874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.658 [2024-07-25 19:07:13.291899] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.658 [2024-07-25 19:07:13.291915] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.658 [2024-07-25 19:07:13.295492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.658 [2024-07-25 19:07:13.304516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.658 [2024-07-25 19:07:13.304912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.658 [2024-07-25 19:07:13.304942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.658 [2024-07-25 19:07:13.304959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.658 [2024-07-25 19:07:13.305188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.658 [2024-07-25 19:07:13.305432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.658 [2024-07-25 19:07:13.305455] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.658 [2024-07-25 19:07:13.305470] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.658 [2024-07-25 19:07:13.309030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.658 [2024-07-25 19:07:13.317930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.658 [2024-07-25 19:07:13.318303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.658 [2024-07-25 19:07:13.318346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.658 [2024-07-25 19:07:13.318364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.658 [2024-07-25 19:07:13.318600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.658 [2024-07-25 19:07:13.318807] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.318827] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.318841] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.321889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.659 [2024-07-25 19:07:13.330658] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:01.659 [2024-07-25 19:07:13.330736] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.659 [2024-07-25 19:07:13.331296] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.331698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.331726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.331743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.331973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.332245] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.332274] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.332289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.335497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.659 [2024-07-25 19:07:13.344642] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.344997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.345026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.345042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.345281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.345506] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.345528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.345541] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.348640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.659 [2024-07-25 19:07:13.358028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.358486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.358523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.358540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.358770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.358992] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.359012] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.359026] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.362172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.659 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.659 [2024-07-25 19:07:13.371506] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.371960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.371989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.372005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.372229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.372481] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.372501] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.372514] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.375681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.659 [2024-07-25 19:07:13.384851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.385243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.385272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.385288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.385532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.385739] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.385759] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.385773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.388837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.659 [2024-07-25 19:07:13.398219] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.398650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.398679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.398695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.398910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.399160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.399181] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.399195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.399583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:01.659 [2024-07-25 19:07:13.402313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.659 [2024-07-25 19:07:13.411578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.412179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.412220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.412239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.412494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.412704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.412725] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.412742] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.415813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.659 [2024-07-25 19:07:13.425023] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.425425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.425455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.425482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.425712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.425919] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.425940] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.425955] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.429022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.659 [2024-07-25 19:07:13.438424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.438819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.438863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.438881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.439138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.439367] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.439398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.439413] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.442480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.659 [2024-07-25 19:07:13.451879] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.452434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.452473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.659 [2024-07-25 19:07:13.452494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.659 [2024-07-25 19:07:13.452735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.659 [2024-07-25 19:07:13.452962] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.659 [2024-07-25 19:07:13.452983] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.659 [2024-07-25 19:07:13.453000] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.659 [2024-07-25 19:07:13.456095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.659 [2024-07-25 19:07:13.465289] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.659 [2024-07-25 19:07:13.465732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.659 [2024-07-25 19:07:13.465763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.660 [2024-07-25 19:07:13.465780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.660 [2024-07-25 19:07:13.466014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.660 [2024-07-25 19:07:13.466270] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.660 [2024-07-25 19:07:13.466304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.660 [2024-07-25 19:07:13.466320] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.660 [2024-07-25 19:07:13.469428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.660 [2024-07-25 19:07:13.478614] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.660 [2024-07-25 19:07:13.479069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.660 [2024-07-25 19:07:13.479099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.660 [2024-07-25 19:07:13.479116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.660 [2024-07-25 19:07:13.479346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.660 [2024-07-25 19:07:13.479577] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.660 [2024-07-25 19:07:13.479598] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.660 [2024-07-25 19:07:13.479612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.660 [2024-07-25 19:07:13.482678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.660 [2024-07-25 19:07:13.485450] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.660 [2024-07-25 19:07:13.485479] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.660 [2024-07-25 19:07:13.485508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.660 [2024-07-25 19:07:13.485519] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.660 [2024-07-25 19:07:13.485530] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.660 [2024-07-25 19:07:13.485585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.660 [2024-07-25 19:07:13.485642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:01.660 [2024-07-25 19:07:13.485645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.660 [2024-07-25 19:07:13.492251] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.660 [2024-07-25 19:07:13.492710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.660 [2024-07-25 19:07:13.492746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.660 [2024-07-25 19:07:13.492765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.660 [2024-07-25 19:07:13.492988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.660 [2024-07-25 19:07:13.493228] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.660 [2024-07-25 19:07:13.493251] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.660 [2024-07-25 19:07:13.493269] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:01.660 [2024-07-25 19:07:13.496534] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.660 [2024-07-25 19:07:13.505786] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.660 [2024-07-25 19:07:13.506333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.660 [2024-07-25 19:07:13.506372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.660 [2024-07-25 19:07:13.506401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.660 [2024-07-25 19:07:13.506626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.660 [2024-07-25 19:07:13.506850] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.660 [2024-07-25 19:07:13.506872] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.660 [2024-07-25 19:07:13.506889] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.660 [2024-07-25 19:07:13.510104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.660 [2024-07-25 19:07:13.519516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.660 [2024-07-25 19:07:13.520124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.660 [2024-07-25 19:07:13.520169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.660 [2024-07-25 19:07:13.520190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.660 [2024-07-25 19:07:13.520415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.660 [2024-07-25 19:07:13.520639] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.660 [2024-07-25 19:07:13.520661] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.660 [2024-07-25 19:07:13.520678] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.660 [2024-07-25 19:07:13.523953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
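errno 111 in the repeated connect() failures above is ECONNREFUSED: the initiator reaches 10.0.0.2 but nothing is accepting on port 4420 at that point, so each reset/reconnect cycle ends in "Resetting controller failed." until a listener is added. A minimal bash probe of the same address and port (a sketch only, not part of the test scripts) shows the same condition:

    # Probe 10.0.0.2:4420 the same way the qpair connect above does.
    # While nothing listens there, this fails just like the errno = 111 entries.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "port 4420 is accepting connections"
    else
        echo "connection refused (matches errno = 111 above)"
    fi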
00:34:01.660 [2024-07-25 19:07:13.533294] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.921 [2024-07-25 19:07:13.533843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.921 [2024-07-25 19:07:13.533889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.921 [2024-07-25 19:07:13.533910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.921 [2024-07-25 19:07:13.534145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.921 [2024-07-25 19:07:13.534371] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.921 [2024-07-25 19:07:13.534393] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.921 [2024-07-25 19:07:13.534411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.921 [2024-07-25 19:07:13.537655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.921 [2024-07-25 19:07:13.546888] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.921 [2024-07-25 19:07:13.547407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.921 [2024-07-25 19:07:13.547446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.921 [2024-07-25 19:07:13.547465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.921 [2024-07-25 19:07:13.547689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.921 [2024-07-25 19:07:13.547911] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.921 [2024-07-25 19:07:13.547944] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.921 [2024-07-25 19:07:13.547961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.921 [2024-07-25 19:07:13.551211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.921 [2024-07-25 19:07:13.560617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.921 [2024-07-25 19:07:13.561248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.921 [2024-07-25 19:07:13.561294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.921 [2024-07-25 19:07:13.561314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.921 [2024-07-25 19:07:13.561556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.921 [2024-07-25 19:07:13.561789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.921 [2024-07-25 19:07:13.561812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.921 [2024-07-25 19:07:13.561829] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.921 [2024-07-25 19:07:13.565221] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.921 [2024-07-25 19:07:13.574246] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.921 [2024-07-25 19:07:13.574649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.921 [2024-07-25 19:07:13.574683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.921 [2024-07-25 19:07:13.574701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.921 [2024-07-25 19:07:13.574919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.921 [2024-07-25 19:07:13.575155] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.921 [2024-07-25 19:07:13.575178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.575194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.578456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.922 [2024-07-25 19:07:13.587838] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 [2024-07-25 19:07:13.588202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.588232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.588249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 [2024-07-25 19:07:13.588465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.588684] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.588706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.588720] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.591935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:01.922 [2024-07-25 19:07:13.601448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.922 [2024-07-25 19:07:13.601820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.601849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.601865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.922 [2024-07-25 19:07:13.602092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.602313] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.602335] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.602349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.605617] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.922 [2024-07-25 19:07:13.615027] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 [2024-07-25 19:07:13.615410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.615439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.615455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 [2024-07-25 19:07:13.615670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.615890] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.615912] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.615927] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.619176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.922 [2024-07-25 19:07:13.628594] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 [2024-07-25 19:07:13.628985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.629013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.629030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 [2024-07-25 19:07:13.629252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.629472] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.629499] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.629515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.631525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.922 [2024-07-25 19:07:13.632817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.922 [2024-07-25 19:07:13.642217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 [2024-07-25 19:07:13.642588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.642615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.642632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 [2024-07-25 19:07:13.642860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.643099] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.643132] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.643146] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.646370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.922 [2024-07-25 19:07:13.655744] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 [2024-07-25 19:07:13.656171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.656209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.656227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 [2024-07-25 19:07:13.656447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.656668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.656691] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.656707] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.659948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.922 [2024-07-25 19:07:13.669372] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 [2024-07-25 19:07:13.669946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.669987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.670008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 [2024-07-25 19:07:13.670241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.670465] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.670497] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.670515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 Malloc0 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.922 [2024-07-25 19:07:13.673784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.922 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.922 [2024-07-25 19:07:13.682947] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.922 [2024-07-25 19:07:13.683307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.922 [2024-07-25 19:07:13.683335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77e70 with addr=10.0.0.2, port=4420 00:34:01.922 [2024-07-25 19:07:13.683352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc77e70 is same with the state(5) to be set 00:34:01.922 [2024-07-25 19:07:13.683568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc77e70 (9): Bad file descriptor 00:34:01.922 [2024-07-25 19:07:13.683787] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.922 [2024-07-25 19:07:13.683810] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.922 [2024-07-25 19:07:13.683824] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.922 [2024-07-25 19:07:13.687073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
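For reference, the rpc_cmd calls traced in this stage (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, plus the nvmf_subsystem_add_listener that follows just below) amount to the following target bring-up; a consolidated sketch, assuming scripts/rpc.py is pointed at the same nvmf_tgt RPC socket that rpc_cmd wraps here:

    # Consolidated from the rpc_cmd traces in this log (sketch only)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64 (MiB), MALLOC_BLOCK_SIZE=512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the resets above stop failing ("Resetting controller successful." further down in this log).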
00:34:01.923 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.923 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.923 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.923 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.923 [2024-07-25 19:07:13.691678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.923 19:07:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.923 19:07:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3688203 00:34:01.923 [2024-07-25 19:07:13.696654] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.182 [2024-07-25 19:07:13.818908] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:12.161 00:34:12.161 Latency(us) 00:34:12.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:12.161 Verification LBA range: start 0x0 length 0x4000 00:34:12.161 Nvme1n1 : 15.01 6650.82 25.98 8778.21 0.00 8270.34 831.34 23010.42 00:34:12.161 =================================================================================================================== 00:34:12.161 Total : 6650.82 25.98 8778.21 0.00 8270.34 831.34 23010.42 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:12.161 rmmod nvme_tcp 00:34:12.161 rmmod nvme_fabrics 00:34:12.161 rmmod nvme_keyring 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3688987 ']' 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3688987 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3688987 ']' 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3688987 00:34:12.161 19:07:22 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:12.161 19:07:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3688987 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3688987' 00:34:12.161 killing process with pid 3688987 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3688987 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3688987 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:12.161 19:07:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.543 19:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:13.543 00:34:13.543 real 0m22.460s 00:34:13.543 user 0m59.768s 00:34:13.543 sys 0m4.414s 00:34:13.543 19:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:13.543 19:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.543 ************************************ 00:34:13.543 END TEST nvmf_bdevperf 00:34:13.543 ************************************ 00:34:13.543 19:07:25 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:13.543 19:07:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:13.543 19:07:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:13.543 19:07:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.543 ************************************ 00:34:13.543 START TEST nvmf_target_disconnect 00:34:13.543 ************************************ 00:34:13.543 19:07:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:13.801 * Looking for test storage... 
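With the bdevperf stage finished (END TEST nvmf_bdevperf above), the run moves on to the target_disconnect stage. To reproduce just this stage outside the CI pipeline, the same script can be invoked directly; a sketch, assuming the workspace layout shown in these paths and a root shell, since the script manipulates network namespaces, iptables, and kernel modules:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/host/target_disconnect.sh --transport=tcp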
00:34:13.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.801 19:07:25 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:13.802 19:07:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.706 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:15.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:15.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.707 19:07:27 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:15.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:15.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:15.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:34:15.707 00:34:15.707 --- 10.0.0.2 ping statistics --- 00:34:15.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.707 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:34:15.707 00:34:15.707 --- 10.0.0.1 ping statistics --- 00:34:15.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.707 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:15.707 ************************************ 00:34:15.707 START TEST nvmf_target_disconnect_tc1 00:34:15.707 ************************************ 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:15.707 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:15.707 
19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:15.708 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:15.708 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.966 [2024-07-25 19:07:27.588760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.966 [2024-07-25 19:07:27.588850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x764520 with addr=10.0.0.2, port=4420 00:34:15.966 [2024-07-25 19:07:27.588890] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:15.966 [2024-07-25 19:07:27.588923] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:15.966 [2024-07-25 19:07:27.588939] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:15.966 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:15.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:15.966 Initializing NVMe Controllers 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:15.966 00:34:15.966 real 0m0.098s 00:34:15.966 user 0m0.038s 00:34:15.966 sys 0m0.060s 
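tc1 is a negative test: the trace above runs the reconnect example against 10.0.0.2:4420 while nothing is listening there (hence the connect() errno = 111), wraps it in NOT/valid_exec_arg, and passes because the example exits non-zero (es=1). A standalone sketch of the same check, reusing the command line traced above and assuming the spdk checkout root as working directory:

    # Expect the probe to fail while no listener exists on 10.0.0.2:4420 (sketch)
    if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected: probe succeeded" >&2; exit 1
    else
        echo "expected failure: no listener at 10.0.0.2:4420 yet"
    fi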
00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.966 ************************************ 00:34:15.966 END TEST nvmf_target_disconnect_tc1 00:34:15.966 ************************************ 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:15.966 ************************************ 00:34:15.966 START TEST nvmf_target_disconnect_tc2 00:34:15.966 ************************************ 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3692025 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3692025 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3692025 ']' 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:15.966 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:15.966 [2024-07-25 19:07:27.705214] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:15.966 [2024-07-25 19:07:27.705301] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.966 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.966 [2024-07-25 19:07:27.769132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.224 [2024-07-25 19:07:27.853929] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.224 [2024-07-25 19:07:27.853982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.224 [2024-07-25 19:07:27.854006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.224 [2024-07-25 19:07:27.854017] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.224 [2024-07-25 19:07:27.854026] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.224 [2024-07-25 19:07:27.854191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:16.224 [2024-07-25 19:07:27.854253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:16.224 [2024-07-25 19:07:27.854317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:16.224 [2024-07-25 19:07:27.854319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.224 19:07:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.224 Malloc0 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.224 [2024-07-25 19:07:28.027221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.224 [2024-07-25 19:07:28.055490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3692168 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:16.224 19:07:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.482 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.500 19:07:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3692025 00:34:18.501 19:07:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 
00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 [2024-07-25 19:07:30.081493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 
starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 [2024-07-25 19:07:30.081802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 
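The aborted reads and writes above are the in-flight I/O of the reconnect job started at 19:07:28; they begin failing two seconds after the target process (pid 3692025) is killed with kill -9. For reference, the target that was just destroyed was assembled by the rpc_cmd calls traced before the kill; expressed directly against the standard scripts/rpc.py client (rpc_cmd is a thin wrapper around it in autotest_common.sh), the same configuration looks roughly like this sketch, with the default RPC socket assumed:

# Target-side setup mirrored from the tc2 rpc_cmd trace: a 64 MiB malloc bdev,
# the TCP transport, one subsystem with one namespace, plus data and discovery listeners.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420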
00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Read completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.501 Write completed with error (sct=0, sc=8) 00:34:18.501 starting I/O failed 00:34:18.502 [2024-07-25 19:07:30.082108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read 
completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Read completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 Write completed with error (sct=0, sc=8) 00:34:18.502 starting I/O failed 00:34:18.502 [2024-07-25 19:07:30.082385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:18.502 [2024-07-25 19:07:30.082569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.082611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.082726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.082754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.082894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.082920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.083023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.083055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.083164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.083190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.083302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.083329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.083464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.083489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 
00:34:18.502 [2024-07-25 19:07:30.083626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.083652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.083781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.083808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.083958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.083984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.084096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.084123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.084216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.084242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.084345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.084371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.084531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.084558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.084757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.084801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.084948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.084974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.085079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.085106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 
00:34:18.502 [2024-07-25 19:07:30.085224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.085251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.085385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.085412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.085597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.085627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.085760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.085787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.085987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.086013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.086125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.086154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.086265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.502 [2024-07-25 19:07:30.086292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.502 qpair failed and we were unable to recover it. 00:34:18.502 [2024-07-25 19:07:30.086399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.086425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.086558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.086585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.086737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.086763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 
00:34:18.503 [2024-07-25 19:07:30.086951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.086977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.087083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.087112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.087244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.087270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.087419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.087453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.087566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.087593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.087733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.087760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.087885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.087912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.088010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.088036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.088176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.088204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.088327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.088353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 
00:34:18.503 [2024-07-25 19:07:30.088456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.088482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.088598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.088624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.088754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.088781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.088954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.088984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.089160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.089187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.089278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.089305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.089419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.089465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.089588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.089614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.089728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.089771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.089902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.089928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 
00:34:18.503 [2024-07-25 19:07:30.090068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.090110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.090240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.090269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.090390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.090416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.090547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.090574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.090750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.090782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.090897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.090928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.091071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.091115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.091221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.091248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.091374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.091400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.091582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.091609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 
00:34:18.503 [2024-07-25 19:07:30.091735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.503 [2024-07-25 19:07:30.091761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.503 qpair failed and we were unable to recover it. 00:34:18.503 [2024-07-25 19:07:30.091891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.091935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.092075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.092123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.092249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.092279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.092456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.092501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.092645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.092689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.092811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.092837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.092964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.092990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.093101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.093126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.093305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.093350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 
00:34:18.504 [2024-07-25 19:07:30.093565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.093592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.093714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.093739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.093906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.093935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.094075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.094126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.094300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.094336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.094467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.094495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.094646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.094690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.094821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.094889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.095003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.095030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.095222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.095250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 
00:34:18.504 [2024-07-25 19:07:30.095360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.095387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.095527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.095555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.095717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.095745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.095845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.095870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.096003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.096030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.096164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.096190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.096320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.096358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.096520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.096563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.096712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.096739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.096863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.096889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 
00:34:18.504 [2024-07-25 19:07:30.097043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.097076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.097209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.097235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.097375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.097403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.097531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.097557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.097706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.504 [2024-07-25 19:07:30.097733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.504 qpair failed and we were unable to recover it. 00:34:18.504 [2024-07-25 19:07:30.097884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.097928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.098083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.098116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.098203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.098229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.098351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.098377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.098480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.098505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 
00:34:18.505 [2024-07-25 19:07:30.098627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.098653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.098748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.098774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.098930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.098955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.099045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.099078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.099182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.099208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.099304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.099340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.099438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.099481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.099622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.099650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.099797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.099843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.099995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.100053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 
00:34:18.505 [2024-07-25 19:07:30.100202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.100231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.100360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.100388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.100505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.100530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.100631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.100656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.100765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.100791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.100919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.100945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.101074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.101109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.101210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.101237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.101349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.101376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.101501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.101531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 
00:34:18.505 [2024-07-25 19:07:30.101686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.101713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.101866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.101893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.102064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.102092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.102195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.102222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.102312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.102361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.102539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.102566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.102693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.102719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.102842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.102878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.102987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.103028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 00:34:18.505 [2024-07-25 19:07:30.103164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.505 [2024-07-25 19:07:30.103191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.505 qpair failed and we were unable to recover it. 
00:34:18.505 [2024-07-25 19:07:30.103311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.103358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.103531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.103558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.103684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.103710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.103892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.103919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.104021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.104047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.104157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.104184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.104313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.104371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.104514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.104542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.104669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.104702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.104840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.104876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 
00:34:18.506 [2024-07-25 19:07:30.105027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.105055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.105170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.105197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.105324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.105355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.105511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.105541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.105685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.105721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.105889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.105920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.106057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.106116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.106214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.106242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.106377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.106407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.106552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.106610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 
00:34:18.506 [2024-07-25 19:07:30.106832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.106862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.106996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.107025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.107189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.107216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.107313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.107346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.107519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.107554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.107692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.107723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.107914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.107958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.108062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.108089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.108198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.108225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 00:34:18.506 [2024-07-25 19:07:30.108376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.506 [2024-07-25 19:07:30.108403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.506 qpair failed and we were unable to recover it. 
00:34:18.507 [2024-07-25 19:07:30.108515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.108542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.108707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.108737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.108850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.108879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.108985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.109014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.109206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.109233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.109376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.109417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.109572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.109619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.109766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.109810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.109948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.109975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.110111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.110137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 
00:34:18.507 [2024-07-25 19:07:30.110290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.110316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.110468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.110493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.110611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.110639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.110762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.110787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.110896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.110922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.111047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.111080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.111256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.111283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.111440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.111468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.111587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.111611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.111762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.111789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 
00:34:18.507 [2024-07-25 19:07:30.111889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.111915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.112057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.112116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.112226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.112256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.112422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.112468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.112590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.112620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.112763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.112794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.112928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.112958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.113079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.113109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.113231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.113257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.113381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.113408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 
00:34:18.507 [2024-07-25 19:07:30.113561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.113591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.113721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.113756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.113895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.113923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.114103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.114130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.114226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.114252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.507 qpair failed and we were unable to recover it. 00:34:18.507 [2024-07-25 19:07:30.114391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.507 [2024-07-25 19:07:30.114419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.114605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.114659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.114787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.114815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.114957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.115004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.115164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.115205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 
00:34:18.508 [2024-07-25 19:07:30.115338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.115367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.115492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.115537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.115697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.115725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.115873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.115900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.116064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.116092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.116198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.116224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.116328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.116371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.116494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.116539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.116660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.116727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.116830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.116859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 
00:34:18.508 [2024-07-25 19:07:30.117004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.117031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.117156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.117194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.117304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.117350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.117526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.117557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.117672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.117704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.117836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.117881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.118021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.118070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.118244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.118272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.118399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.118424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.118519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.118545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 
00:34:18.508 [2024-07-25 19:07:30.118695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.118722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.118846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.118874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.119041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.119076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.119198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.119224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.119377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.119405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.119543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.119571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.119749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.119780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.119922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.119949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.120063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.508 [2024-07-25 19:07:30.120089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.508 qpair failed and we were unable to recover it. 00:34:18.508 [2024-07-25 19:07:30.120214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.120239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 
00:34:18.509 [2024-07-25 19:07:30.120379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.120408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.120559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.120586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.120710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.120737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.120905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.120934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.121080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.121126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.121235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.121268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.121375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.121405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.121579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.121609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.121848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.121875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.122034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.122066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 
00:34:18.509 [2024-07-25 19:07:30.122171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.122196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.122317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.122344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.122525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.122555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.122752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.122779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.122937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.122965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.123069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.123094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.123222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.123249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.123371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.123402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.123566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.123594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.123751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.123778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 
00:34:18.509 [2024-07-25 19:07:30.123871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.123897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.124014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.124055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.124221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.124250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.124343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.124368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.124503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.124531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.124777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.124845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.125011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.125038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.125144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.125170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.125298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.125331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.125488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.125516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 
00:34:18.509 [2024-07-25 19:07:30.125644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.125672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.125775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.125801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.125932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.125963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.509 [2024-07-25 19:07:30.126084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.509 [2024-07-25 19:07:30.126111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.509 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.126246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.126274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.126404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.126431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.126534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.126559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.126665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.126691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.126792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.126818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.126938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.126963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 
00:34:18.510 [2024-07-25 19:07:30.127112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.127139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.127241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.127267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.127416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.127443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.127567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.127594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.127725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.127752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.127873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.127899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.128064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.128091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.128243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.128270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.128392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.128436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.128589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.128616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 
00:34:18.510 [2024-07-25 19:07:30.128710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.128736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.128888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.128916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.129044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.129081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.129184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.129210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.129364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.129391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.129481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.129507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.129597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.129622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.129774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.129801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.129915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.129944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.130141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.130169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 
00:34:18.510 [2024-07-25 19:07:30.130264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.130289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.130407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.130432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.130551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.130578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.130704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.130731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.130889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.130916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.131045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.131077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.131184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.131210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.131308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.131334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.131424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.510 [2024-07-25 19:07:30.131449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.510 qpair failed and we were unable to recover it. 00:34:18.510 [2024-07-25 19:07:30.131575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.131602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 
00:34:18.511 [2024-07-25 19:07:30.131736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.131763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.131866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.131891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.132011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.132037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.132144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.132173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.132273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.132299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.132445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.132471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.132629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.132654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.132785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.132812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.132949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.132977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.133151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.133179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 
00:34:18.511 [2024-07-25 19:07:30.133303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.133330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.133456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.133500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.133617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.133644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.133772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.133799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.133894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.133920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.134045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.134077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.134168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.134194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.134355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.134397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.134568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.134595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.134718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.134745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 
00:34:18.511 [2024-07-25 19:07:30.134864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.134890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.135056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.135091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.135268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.135296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.135512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.135566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.135743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.135773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.135922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.135949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.136046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.136076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.511 [2024-07-25 19:07:30.136170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.511 [2024-07-25 19:07:30.136196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.511 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.136325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.136351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.136469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.136496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 
00:34:18.512 [2024-07-25 19:07:30.136645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.136679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.136815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.136841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.136968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.136995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.137144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.137171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.137299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.137326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.137420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.137445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.137595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.137621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.137718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.137743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.137867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.137894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.138031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.138064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 
00:34:18.512 [2024-07-25 19:07:30.138194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.138221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.138372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.138399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.138545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.138575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.138748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.138775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.138875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.138900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.139016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.139043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.139146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.139172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.139267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.139292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.139422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.139450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.139608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.139636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 
00:34:18.512 [2024-07-25 19:07:30.139790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.139817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.139938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.139965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.140113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.140140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.140262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.140289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.140481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.140508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.140635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.140662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.140793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.140820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.140960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.140989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.141142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.141169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.141300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.141343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 
00:34:18.512 [2024-07-25 19:07:30.141473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.512 [2024-07-25 19:07:30.141502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.512 qpair failed and we were unable to recover it. 00:34:18.512 [2024-07-25 19:07:30.141675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.141702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.141827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.141872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.141991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.142020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.142200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.142227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.142347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.142391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.142544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.142571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.142695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.142721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.142876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.142902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.143047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.143098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 
00:34:18.513 [2024-07-25 19:07:30.143206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.143232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.143336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.143369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.143461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.143488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.143640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.143667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.143758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.143785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.143906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.143949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.144045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.144077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.144235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.144261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.144435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.144465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.144591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.144619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 
00:34:18.513 [2024-07-25 19:07:30.144774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.144800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.144949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.144975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.145082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.145109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.145268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.145295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.145410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.145439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.145590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.145616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.145768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.145794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.145927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.145956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.146082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.146109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.146239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.146265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 
00:34:18.513 [2024-07-25 19:07:30.146414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.146440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.146590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.146616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.146735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.146762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.146880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.146910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.147079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.147126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.147255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.513 [2024-07-25 19:07:30.147282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.513 qpair failed and we were unable to recover it. 00:34:18.513 [2024-07-25 19:07:30.147441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.147467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.147595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.147622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.147774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.147804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.147924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.147953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 
00:34:18.514 [2024-07-25 19:07:30.148110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.148137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.148266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.148293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.148411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.148437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.148594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.148620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.148757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.148783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.148917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.148944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.149039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.149085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.149205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.149232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.149378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.149408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.149528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.149554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 
00:34:18.514 [2024-07-25 19:07:30.149663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.149690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.149843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.149870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.150041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.150079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.150230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.150256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.150415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.150441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.150572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.150598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.150714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.150740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.150860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.150886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.151011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.151038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.151169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.151196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 
00:34:18.514 [2024-07-25 19:07:30.151323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.151349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.151438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.151464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.151564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.151590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.151710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.151736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.151822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.151849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.151977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.152003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.152196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.152223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.152351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.152377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.152473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.152500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 00:34:18.514 [2024-07-25 19:07:30.152591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.152616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.514 qpair failed and we were unable to recover it. 
00:34:18.514 [2024-07-25 19:07:30.152746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.514 [2024-07-25 19:07:30.152773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.152880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.152907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.152997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.153023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.153209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.153237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.153358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.153386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.153529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.153572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.153666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.153692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.153797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.153824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.153989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.154015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.154182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.154213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 
00:34:18.515 [2024-07-25 19:07:30.154347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.154373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.154469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.154495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.154589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.154615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.154748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.154774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.154879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.154906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.155046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.155082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.155199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.155226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.155357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.155383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.155475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.155502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.155617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.155644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 
00:34:18.515 [2024-07-25 19:07:30.155816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.155846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.155984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.156010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.156144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.156170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.156300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.156327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.156483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.156510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.156615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.156641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.156770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.156797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.156885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.156910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.157035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.157081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.157191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.157217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 
00:34:18.515 [2024-07-25 19:07:30.157351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.157381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.157504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.157531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.157655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.157682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.157814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.157841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.157941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.157967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.158077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.158113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.515 [2024-07-25 19:07:30.158238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.515 [2024-07-25 19:07:30.158265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.515 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.158425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.158452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.158634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.158663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.158780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.158806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 
00:34:18.516 [2024-07-25 19:07:30.158928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.158955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.159050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.159084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.159185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.159211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.159362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.159388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.159569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.159598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.159782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.159808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.159982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.160012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.160141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.160168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.160298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.160325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.160452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.160479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 
00:34:18.516 [2024-07-25 19:07:30.160600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.160626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.160758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.160784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.160872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.160898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.161035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.161070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.161236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.161263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.161388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.161414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.161567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.161593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.161682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.161709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.161814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.161841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.161966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.161992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 
00:34:18.516 [2024-07-25 19:07:30.162155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.162183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.162351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.162379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.162534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.162561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.162653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.162679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.162814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.162841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.163013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-07-25 19:07:30.163043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.516 qpair failed and we were unable to recover it. 00:34:18.516 [2024-07-25 19:07:30.163203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.163230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.163358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.163384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.163498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.163540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.163687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.163713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 
00:34:18.517 [2024-07-25 19:07:30.163831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.163857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.163998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.164042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.164186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.164213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.164363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.164389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.164506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.164549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.164653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.164680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.164837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.164864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.165020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.165051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.165227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.165254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.165377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.165403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 
00:34:18.517 [2024-07-25 19:07:30.165535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.165561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.165661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.165688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.165779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.165805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.165902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.165929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.166051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.166087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.166242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.166285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.166441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.166468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.166585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.166613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.166743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.166770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.166931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.166958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 
00:34:18.517 [2024-07-25 19:07:30.167098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.167125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.167257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.167283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.167378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.167405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.167554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.167580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.167703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.167729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.167824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.167850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.168004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.168030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.168196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.168222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.168349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.168376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 00:34:18.517 [2024-07-25 19:07:30.168471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-07-25 19:07:30.168498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.517 qpair failed and we were unable to recover it. 
00:34:18.517 [2024-07-25 19:07:30.168622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.168649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.168788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.168817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.168963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.168989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.169091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.169117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.169271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.169298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.169407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.169434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.169551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.169577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.169744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.169774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.169950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.169976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.170087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.170119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 
00:34:18.518 [2024-07-25 19:07:30.170250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.170278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.170384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.170411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.170541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.170575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.170699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.170725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.170878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.170904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.171050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.171089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.171198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.171228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.171342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.171368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.171510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.171538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.171666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.171692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 
00:34:18.518 [2024-07-25 19:07:30.171811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.171838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.171968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.171994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.172103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.172130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.172217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.172244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.172372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.172399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.172515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.172541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.172658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.172685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.172855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.172885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.173070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.173098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.173229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.173255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 
00:34:18.518 [2024-07-25 19:07:30.173349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.173376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.518 [2024-07-25 19:07:30.173505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.518 [2024-07-25 19:07:30.173532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.518 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.173660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.173687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.173791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.173817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.173919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.173946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.174099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.174126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.174252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.174279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.174459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.174485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.174601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.174628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.174778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.174805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 
00:34:18.519 [2024-07-25 19:07:30.174925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.174954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.175104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.175131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.175234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.175261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.175350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.175377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.175506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.175533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.175659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.175691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.175818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.175845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.175962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.175988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.176117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.176145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.176338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.176365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 
00:34:18.519 [2024-07-25 19:07:30.176530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.176557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.176680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.176724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.176855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.176885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.177054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.177169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.177318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.177348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.177489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.177516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.177677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.177704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.177830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.177856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.177980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.178008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.178144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.178171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 
00:34:18.519 [2024-07-25 19:07:30.178326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.178353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.178555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.178582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.178686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.178712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.178838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.178865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.178987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.179016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.179148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.179175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.179300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.519 [2024-07-25 19:07:30.179327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.519 qpair failed and we were unable to recover it. 00:34:18.519 [2024-07-25 19:07:30.179425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.520 [2024-07-25 19:07:30.179451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.520 qpair failed and we were unable to recover it. 00:34:18.520 [2024-07-25 19:07:30.179575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.520 [2024-07-25 19:07:30.179602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.520 qpair failed and we were unable to recover it. 00:34:18.520 [2024-07-25 19:07:30.179754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.520 [2024-07-25 19:07:30.179781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.520 qpair failed and we were unable to recover it. 
00:34:18.520 [2024-07-25 19:07:30.179910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.520 [2024-07-25 19:07:30.179936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:18.520 qpair failed and we were unable to recover it.
00:34:18.520 [2024-07-25 19:07:30.180072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.520 [2024-07-25 19:07:30.180099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:18.520 qpair failed and we were unable to recover it.
[The same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously through [2024-07-25 19:07:30.213778], with the console timestamp advancing from 00:34:18.520 to 00:34:18.526.]
00:34:18.526 [2024-07-25 19:07:30.213873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.213903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.214041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.214078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.214197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.214224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.214323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.214350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.214466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.214495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.214617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.214644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.214770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.214797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.214946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.214993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.215155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.215186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.215322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.215364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 
00:34:18.526 [2024-07-25 19:07:30.215484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.215514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.215642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.215684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.215813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.215842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.215995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.216021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.216180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.216207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.216343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.216372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.216475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.216504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.216634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.216668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.216806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.216833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.526 qpair failed and we were unable to recover it. 00:34:18.526 [2024-07-25 19:07:30.217017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.526 [2024-07-25 19:07:30.217046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 
00:34:18.527 [2024-07-25 19:07:30.217255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.217286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.217414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.217443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.217575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.217601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.217731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.217758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.217913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.217940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.218037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.218169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.218294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.218407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.218528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 
00:34:18.527 [2024-07-25 19:07:30.218671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.218790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.218936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.218963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.219149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.219179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.219354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.219381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.219503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.219530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.219702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.219731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.219860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.219889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.220038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.220071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.220197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.220224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 
00:34:18.527 [2024-07-25 19:07:30.220375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.220402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.220546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.220573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.220698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.220724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.220830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.220857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.220986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.221013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.221129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.221161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.221277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.221303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.221431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.221458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.221642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.221672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.221779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.221809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 
00:34:18.527 [2024-07-25 19:07:30.221980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.222007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.222114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.222159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.222299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.222326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.222453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.222479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.222589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.527 [2024-07-25 19:07:30.222615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.527 qpair failed and we were unable to recover it. 00:34:18.527 [2024-07-25 19:07:30.222736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.222763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.222947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.222973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.223118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.223145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.223237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.223263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.223383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.223410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 
00:34:18.528 [2024-07-25 19:07:30.223507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.223537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.223638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.223665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.223770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.223796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.223923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.223950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.224136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.224164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.224330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.224377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.224522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.224548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.224695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.224738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.224848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.224877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.225019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.225047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 
00:34:18.528 [2024-07-25 19:07:30.225201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.225228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.225327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.225353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.225475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.225501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.225637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.225667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.225838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.225865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.225981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.226025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.226149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.226177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.226309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.226335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.226431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.226461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.226594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.226620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 
00:34:18.528 [2024-07-25 19:07:30.226739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.226768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.226909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.226937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.227070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.227097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.227248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.227274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.227421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.227450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.227586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.227616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.227752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.227778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.227900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.227926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.228078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.228108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 00:34:18.528 [2024-07-25 19:07:30.228220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.528 [2024-07-25 19:07:30.228249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.528 qpair failed and we were unable to recover it. 
00:34:18.528 [2024-07-25 19:07:30.228373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.228400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.228489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.228516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.228663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.228690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.228798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.228825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.228950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.228977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.229115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.229142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.229280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.229310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.229461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.229487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.229649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.229682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.229818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.229848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 
00:34:18.529 [2024-07-25 19:07:30.230008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.230037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.230171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.230201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.230352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.230379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.230528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.230555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.230705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.230735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.230918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.230944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.231069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.231096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.231200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.231226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.231314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.231341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.231459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.231485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 
00:34:18.529 [2024-07-25 19:07:30.231587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.231614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.231761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.231788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.231909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.231939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.232078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.232124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.232248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.232274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.232410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.232453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.232635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.232665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.232803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.232832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.233003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.233030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.233156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.233186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 
00:34:18.529 [2024-07-25 19:07:30.233313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.233340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.233460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.233486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.233580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.233606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.233734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.233761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.233884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.233914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.529 [2024-07-25 19:07:30.234081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.529 [2024-07-25 19:07:30.234110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.529 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.234232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.234259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.234387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.234413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.234534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.234561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.234729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.234759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 
00:34:18.530 [2024-07-25 19:07:30.234899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.234926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.235056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.235089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.235245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.235289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.235420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.235491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.235637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.235664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.235791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.235818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.235942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.235968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.236124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.236167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.236287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.236314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.236404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.236429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 
00:34:18.530 [2024-07-25 19:07:30.236572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.236599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.236735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.236765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.236933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.236962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.237115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.237142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.237260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.237287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.237411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.237440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.237593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.237624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.237782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.237809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.237950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.237979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.238152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.238181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 
00:34:18.530 [2024-07-25 19:07:30.238310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.238343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.238460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.238486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.238635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.238664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.238779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.238822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.238948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.238974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.239139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.239169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.239274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.239326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.239435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.239461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.530 qpair failed and we were unable to recover it. 00:34:18.530 [2024-07-25 19:07:30.239588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.530 [2024-07-25 19:07:30.239614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.239733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.239759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 
00:34:18.531 [2024-07-25 19:07:30.239945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.239974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.240112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.240141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.240282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.240308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.240459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.240502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.240613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.240642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.240752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.240781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.240909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.240936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.241070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.241106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.241231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.241258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.241388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.241417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 
00:34:18.531 [2024-07-25 19:07:30.241563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.241590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.241710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.241736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.241889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.241918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.242066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.242096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.242244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.242271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.242398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.242425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.242556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.242583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.242743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.242770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.242886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.242913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.243071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.243120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 
00:34:18.531 [2024-07-25 19:07:30.243286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.243315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.243498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.243549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.243698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.243724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.243853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.243879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.531 [2024-07-25 19:07:30.244028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.531 [2024-07-25 19:07:30.244066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.531 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.244236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.244265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.244403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.244429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.244527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.244558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.244687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.244714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.244839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.244866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 
00:34:18.532 [2024-07-25 19:07:30.245024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.245053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.245229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.245257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.245379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.245432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.245563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.245590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.245742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.245769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.245867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.245893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.246025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.246052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.246194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.246221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.246350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.246377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.246477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.246503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 
00:34:18.532 [2024-07-25 19:07:30.246632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.246659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.246787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.246813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.246941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.246968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.247119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.247146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.247330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.247359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.247503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.247532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.247668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.247695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.247846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.247890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.248051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.248088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.248199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.248228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 
00:34:18.532 [2024-07-25 19:07:30.248404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.248431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.248551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.248595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.248715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.248744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.248906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.248935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.249112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.249143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.249241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.249268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.249428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.249455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.249628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.249687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.249839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.249866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.250033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.250083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 
00:34:18.532 [2024-07-25 19:07:30.250227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.532 [2024-07-25 19:07:30.250257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.532 qpair failed and we were unable to recover it. 00:34:18.532 [2024-07-25 19:07:30.250420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.250450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.250594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.250621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.250755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.250799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.250937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.250966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.251206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.251238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.251363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.251390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.251521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.251547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.251691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.251718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.251882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.251911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 
00:34:18.533 [2024-07-25 19:07:30.252047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.252084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.252219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.252246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.252356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.252382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.252535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.252561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.252692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.252718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.252844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.252870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.253066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.253094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.253221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.253247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.253383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.253409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.253538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.253580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 
00:34:18.533 [2024-07-25 19:07:30.253688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.253717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.253843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.253873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.254017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.254044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.254172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.254199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.254326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.254368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.254501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.254527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.254652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.254679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.254807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.254850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.254983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.255012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.255151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.255181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 
00:34:18.533 [2024-07-25 19:07:30.255331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.255358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.255482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.255511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.255656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.255683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.255789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.255816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.533 qpair failed and we were unable to recover it. 00:34:18.533 [2024-07-25 19:07:30.255955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.533 [2024-07-25 19:07:30.255982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.256123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.256154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.256282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.256309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.256470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.256499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.256603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.256630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.256761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.256787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 
00:34:18.534 [2024-07-25 19:07:30.256880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.256907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.257056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.257092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.257251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.257277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.257393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.257420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.257600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.257629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.257768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.257798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.257927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.257954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.258113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.258140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.258293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.258346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.258484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.258510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 
00:34:18.534 [2024-07-25 19:07:30.258683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.258709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.258860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.258887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.259029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.259079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.259243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.259269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.259405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.259431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.259530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.259556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.259678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.259708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.259860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.259886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.259980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.260007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.260123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.260150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 
00:34:18.534 [2024-07-25 19:07:30.260248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.260274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.260426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.260455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.260580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.260610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.260740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.260767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.260884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.260914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.261019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.261048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.261176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.261203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.261353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.534 [2024-07-25 19:07:30.261380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.534 qpair failed and we were unable to recover it. 00:34:18.534 [2024-07-25 19:07:30.261521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.261550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.261685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.261715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 
00:34:18.535 [2024-07-25 19:07:30.261856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.261883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.262005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.262031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.262197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.262228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.262379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.262408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.262557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.262583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.262736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.262780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.262932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.262961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.263072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.263102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.263242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.263269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.263384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.263425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 
00:34:18.535 [2024-07-25 19:07:30.263570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.263600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.263705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.263735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.263908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.263935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.264030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.264056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.264232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.264259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.264351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.264378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.264530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.264557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.264728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.264758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.264856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.264886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.265033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.265068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 
00:34:18.535 [2024-07-25 19:07:30.265194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.265220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.265321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.265348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.265500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.265527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.265646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.265675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.265786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.265813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.265942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.265969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.266086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.266129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.266231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.266258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.266351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.266378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.266529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.266556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 
00:34:18.535 [2024-07-25 19:07:30.266685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.266715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.266846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.266875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.267019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.267046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.267226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.267260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.267376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.267406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.535 [2024-07-25 19:07:30.267543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.535 [2024-07-25 19:07:30.267572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.535 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.267724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.267751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.267869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.267895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.268070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.268097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.268228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.268255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 
00:34:18.536 [2024-07-25 19:07:30.268387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.268414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.268542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.268568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.268731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.268759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.268912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.268945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.269100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.269127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.269237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.269264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.269377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.269406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.269551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.269580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.269730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.269757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.269880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.269906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 
00:34:18.536 [2024-07-25 19:07:30.269993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.270019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.270139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.270166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.270317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.270343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.270477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.270507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.270644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.270674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.270800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.270829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.271005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.271031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.271195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.271221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.271365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.271394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 00:34:18.536 [2024-07-25 19:07:30.271536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.271565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.536 qpair failed and we were unable to recover it. 
00:34:18.536 [2024-07-25 19:07:30.271685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.536 [2024-07-25 19:07:30.271711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.271810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.271837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.271973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.272003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.272142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.272171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.272349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.272375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.272501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.272546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.272721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.272747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.272880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.272924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.273071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.273097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.273202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.273228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 
00:34:18.537 [2024-07-25 19:07:30.273384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.273411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.273522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.273551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.273672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.273698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.273793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.273819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.273986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.274032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.274194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.274226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.274376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.274404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.274515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.274543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.274719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.274747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.274879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.274907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 
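The stretch above and below is one failure pattern repeating: connect() inside posix_sock_create() returns errno = 111, nvme_tcp_qpair_connect_sock() then reports the socket connection error for the qpair it was building toward 10.0.0.2:4420, and that qpair is dropped as unrecoverable. On Linux, errno 111 is ECONNREFUSED, i.e. the peer address is reachable but nothing is accepting TCP connections on port 4420 at that moment; the differing tqpair values (0x795570, 0x7f50f0000b90, and later 0x7f5100000b90) are simply the addresses of the successive qpair objects being retried. A minimal, self-contained C sketch of the underlying failure, reusing the address and port from the log (illustrative only, not the SPDK implementation):

    /* Sketch: a plain TCP connect() to 10.0.0.2:4420 with no listener on the
     * target side fails with ECONNREFUSED, which is 111 on Linux.  If the
     * address were unreachable or filtered instead, a different errno
     * (e.g. EHOSTUNREACH or ETIMEDOUT) would show up. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With the listener down this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }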
00:34:18.537 [2024-07-25 19:07:30.275048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.275085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.275184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.275211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.275336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.275362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.275492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.275519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.275650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.275676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.275805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.275846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.275989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.276021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.276182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.276215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.276314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.276340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.276469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.276495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 
00:34:18.537 [2024-07-25 19:07:30.276602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.276632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.276809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.276835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.276980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.277010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.277148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.277176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.277282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.277309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.277438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.277464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.537 qpair failed and we were unable to recover it. 00:34:18.537 [2024-07-25 19:07:30.277588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.537 [2024-07-25 19:07:30.277616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.277736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.277762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.277863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.277890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.278018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.278045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 
00:34:18.538 [2024-07-25 19:07:30.278145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.278172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.278380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.278425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.278581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.278608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.278739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.278765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.278861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.278888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.279013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.279041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.279188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.279215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.279318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.279363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.279480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.279507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.279631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.279657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 
00:34:18.538 [2024-07-25 19:07:30.279897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.279949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.280135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.280162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.280314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.280341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.280438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.280481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.280615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.280650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.280765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.280796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.280928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.280954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.281079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.281106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.281202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.281229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.281326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.281371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 
00:34:18.538 [2024-07-25 19:07:30.281522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.281548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.281675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.281702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.281835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.281861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.281985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.282014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.282168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.282196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.282296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.282322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.282415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.282442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.538 qpair failed and we were unable to recover it. 00:34:18.538 [2024-07-25 19:07:30.282571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.538 [2024-07-25 19:07:30.282599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.282701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.282728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.282855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.282882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 
00:34:18.539 [2024-07-25 19:07:30.283024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.283053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.283186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.283214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.283315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.283342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.283494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.283520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.283619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.283645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.283776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.283803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.283934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.283961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.284110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.284137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.284294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.284321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.284550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.284610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 
00:34:18.539 [2024-07-25 19:07:30.284761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.284787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.284923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.284950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.285079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.285105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.285208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.285235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.285366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.285393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.285516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.285560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.285690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.285719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.285851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.285880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.286102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.286129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.286259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.286285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 
00:34:18.539 [2024-07-25 19:07:30.286391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.286421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.286556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.286586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.286729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.286756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.286907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.286950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.287057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.287097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.287243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.287270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.287399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.287425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.287546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.287573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.287723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.287765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.287869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.287898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 
00:34:18.539 [2024-07-25 19:07:30.288024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.288051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.288209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.288236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.539 [2024-07-25 19:07:30.288378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.539 [2024-07-25 19:07:30.288408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.539 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.288576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.288644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.288788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.288815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.288943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.288970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.289070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.289096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.289221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.289247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.289356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.289383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.289510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.289537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 
00:34:18.540 [2024-07-25 19:07:30.289692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.289722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.289921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.289951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.290130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.290157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.290252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.290278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.290385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.290428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.290541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.290570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.290690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.290718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.290872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.290899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.291038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.291076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.291250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.291276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 
00:34:18.540 [2024-07-25 19:07:30.291395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.291422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.291526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.291552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.291730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.291761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.291924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.291954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.292075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.292102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.292234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.292261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.292393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.292420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.292562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.292592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.292716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.292743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.292866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.292893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 
00:34:18.540 [2024-07-25 19:07:30.293011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.293040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.540 [2024-07-25 19:07:30.293164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.540 [2024-07-25 19:07:30.293191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.540 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.293311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.293337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.293509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.293539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.293646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.293691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.293832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.293859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.294013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.294042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.294190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.294217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.294325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.294351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.294473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.294499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 
00:34:18.541 [2024-07-25 19:07:30.294600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.294627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.294777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.294804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.294949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.294979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.295144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.295174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.295292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.295318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.295468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.295495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.295676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.295705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.295826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.295856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.296023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.296050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.296208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.296238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 
00:34:18.541 [2024-07-25 19:07:30.296378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.296407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.296516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.296546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.296690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.296718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.296844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.296871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.297025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.297076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.297224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.297250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.297350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.297377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.297469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.297497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.297632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.297659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.297799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.297829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 
00:34:18.541 [2024-07-25 19:07:30.297947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.297992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.298151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.298182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.298310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.298337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.298514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a30f0 is same with the state(5) to be set 00:34:18.541 [2024-07-25 19:07:30.298695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.298736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.298889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.298935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.299052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.299089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.299197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.299223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.299352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.299378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.541 qpair failed and we were unable to recover it. 00:34:18.541 [2024-07-25 19:07:30.299531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.541 [2024-07-25 19:07:30.299569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 
00:34:18.542 [2024-07-25 19:07:30.299695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.299722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.299849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.299876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.300003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.300030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.300146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.300173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.300296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.300322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.300446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.300476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.300650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.300679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.300817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.300847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.300998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.301025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.301155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.301182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 
00:34:18.542 [2024-07-25 19:07:30.301282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.301309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.301481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.301508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.301739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.301792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.301931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.301962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.302119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.302146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.302275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.302301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.302471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.302501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.302614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.302644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.302847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.302882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.303047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.303116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 
00:34:18.542 [2024-07-25 19:07:30.303210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.303237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.303388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.303417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.303535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.303578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.303727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.303756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.303919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.303948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.304119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.304146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.304266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.304293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.304415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.542 [2024-07-25 19:07:30.304446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.542 qpair failed and we were unable to recover it. 00:34:18.542 [2024-07-25 19:07:30.304576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.304606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.304744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.304774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 
00:34:18.543 [2024-07-25 19:07:30.304940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.304980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.305086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.305122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.305250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.305296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.305450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.305481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.305645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.305675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.305794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.305823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.305935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.305962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.306120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.306147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.306275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.306301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.306516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.306546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 
00:34:18.543 [2024-07-25 19:07:30.306677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.306706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.306821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.306851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.307012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.307052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.307201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.307230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.307354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.307399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.307524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.307555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.307692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.307722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.307847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.307890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.308065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.308093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.308216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.308242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 
00:34:18.543 [2024-07-25 19:07:30.308363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.308393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.308585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.308615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.308727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.308757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.308858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.308889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.309068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.309095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.309202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.309228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.309377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.309406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.309519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.309567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.309704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.309738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.309868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.309898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 
00:34:18.543 [2024-07-25 19:07:30.310014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.310041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.310177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.310204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.310336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.310381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.310557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.310587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.310751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.310781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.543 qpair failed and we were unable to recover it. 00:34:18.543 [2024-07-25 19:07:30.310931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.543 [2024-07-25 19:07:30.310958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.311121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.311149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.311277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.311304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.311433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.311464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.311628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.311658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 
00:34:18.544 [2024-07-25 19:07:30.311811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.311856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.311995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.312024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.312193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.312221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.312357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.312384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.312508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.312537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.312701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.312731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.312842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.312871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.313012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.313042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.313199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.313226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.313370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.313400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 
00:34:18.544 [2024-07-25 19:07:30.313501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.313531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.313725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.313755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.313919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.313949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.314128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.314155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.314286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.314312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.314455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.314483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.314670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.314700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.314858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.314887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.315020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.315050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.315221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.315247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 
00:34:18.544 [2024-07-25 19:07:30.315367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.315394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.315513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.315542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.315675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.315705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.315844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.315875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.315977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.316007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.316129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.316157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.316291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.316334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.316434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.316464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.316627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.316662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.316792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.316833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 
00:34:18.544 [2024-07-25 19:07:30.316963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.316992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.317140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.317187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.317338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.317383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.317530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.317574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.544 qpair failed and we were unable to recover it. 00:34:18.544 [2024-07-25 19:07:30.317726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.544 [2024-07-25 19:07:30.317772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.317900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.317927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.318048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.318085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.318238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.318283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.318433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.318478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.318624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.318670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 
00:34:18.545 [2024-07-25 19:07:30.318769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.318797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.318893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.318920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.319076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.319135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.319277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.319308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.319447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.319477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.319666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.319696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.319804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.319834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.319958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.319985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.320138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.320166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.320306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.320336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 
00:34:18.545 [2024-07-25 19:07:30.320506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.320536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.320700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.320763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.320923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.320952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.321095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.321142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.321262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.321290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.321450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.321500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.321647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.321735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.321895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.321922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.322073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.322110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.322283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.322326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 
00:34:18.545 [2024-07-25 19:07:30.322476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.322520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.322694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.322738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.322857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.322884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.323034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.323074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.323263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.323293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.323390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.323420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.323567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.323595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.323714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.323741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.323897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.323924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.324027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.324055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 
00:34:18.545 [2024-07-25 19:07:30.324200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.324227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.324353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.324380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.324499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.324526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.324658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.324685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.545 [2024-07-25 19:07:30.324839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.545 [2024-07-25 19:07:30.324867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.545 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.324991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.325018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.325132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.325160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.325349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.325394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.325565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.325620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.325743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.325770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 
00:34:18.546 [2024-07-25 19:07:30.325889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.325916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.326043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.326077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.326202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.326232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.326393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.326436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.326573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.326617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.326749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.326775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.326874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.326902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.327027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.327053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.327206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.327250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.327430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.327479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 
00:34:18.546 [2024-07-25 19:07:30.327627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.327670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.327791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.327817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.327950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.327977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.328076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.328108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.328253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.328297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.328499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.328548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.328702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.328728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.328858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.328885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.329031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.329057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.329196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.329240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 
00:34:18.546 [2024-07-25 19:07:30.329410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.329457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.329706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.329755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.329884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.329911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.330037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.330069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.330213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.330257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.330442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.330472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.330611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.330655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.330806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.330832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.546 [2024-07-25 19:07:30.330964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.546 [2024-07-25 19:07:30.330991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.546 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.331131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.331177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 
00:34:18.547 [2024-07-25 19:07:30.331349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.331393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.331565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.331607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.331732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.331758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.331892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.331919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.332074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.332112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.332265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.332309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.332478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.332523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.332633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.332658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.332792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.332817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.332945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.332971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 
00:34:18.547 [2024-07-25 19:07:30.333067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.333093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.333254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.333298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.333481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.333524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.333660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.333705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.333816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.333844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.333950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.333978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.334118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.334146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.334250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.334278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.334471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.334505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.334646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.334676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 
00:34:18.547 [2024-07-25 19:07:30.334782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.334811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.334977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.335017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.335131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.335157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.335277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.335308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.335474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.335519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.335632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.335661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.335787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.335815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.335935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.335962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.336119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.336145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.336274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.336301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 
00:34:18.547 [2024-07-25 19:07:30.336439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.336467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.336573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.336600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.336724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.336752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.547 [2024-07-25 19:07:30.336881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.547 [2024-07-25 19:07:30.336906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.547 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.337026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.337052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.337161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.337185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.337322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.337347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.337447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.337472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.337623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.337647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.337752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.337778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 
00:34:18.831 [2024-07-25 19:07:30.337881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.337905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.338040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.338073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.338193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.338236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.338370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.338416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.338519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.338544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.338675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.338700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.338855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.338880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.338975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.339001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.339137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.339162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 00:34:18.831 [2024-07-25 19:07:30.339303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.831 [2024-07-25 19:07:30.339357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.831 qpair failed and we were unable to recover it. 
00:34:18.831 [2024-07-25 19:07:30.339474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.339501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.339649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.339674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.339802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.339832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.339965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.339990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.340123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.340152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.340290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.340336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.340526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.340552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.340679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.340703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.340805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.340831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.340934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.340959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 
00:34:18.832 [2024-07-25 19:07:30.341088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.341118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.341243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.341268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.341374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.341399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.341525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.341550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.341671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.341696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.341846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.341874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.342006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.342033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.342216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.342265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.342416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.342460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.342559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.342589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 
00:34:18.832 [2024-07-25 19:07:30.342685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.342711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.342803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.342831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.342930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.342955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.343083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.343110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.343266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.343292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.343378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.343403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.343499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.343525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.343617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.343642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.343729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.343755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.343859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.343885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 
00:34:18.832 [2024-07-25 19:07:30.344012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.344038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.344196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.344236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.344395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.344423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.344550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.344578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.344691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.344718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.344847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.344874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.345003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.832 [2024-07-25 19:07:30.345030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.832 qpair failed and we were unable to recover it. 00:34:18.832 [2024-07-25 19:07:30.345160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.345208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.345319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.345349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.345513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.345556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 
00:34:18.833 [2024-07-25 19:07:30.345664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.345691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.345817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.345844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.345939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.345970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.346078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.346107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.346244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.346271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.346375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.346402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.346522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.346549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.346676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.346704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.346813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.346840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.346945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.346973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 
00:34:18.833 [2024-07-25 19:07:30.347079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.347104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.347202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.347229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.347403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.347434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.347598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.347641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.347772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.347800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.347959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.347986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.348127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.348157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.348322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.348366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.348514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.348559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.348671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.348697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 
00:34:18.833 [2024-07-25 19:07:30.348826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.348852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.348995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.349035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.349213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.349257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.349383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.349421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.349561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.349598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.349810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.349841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.349963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.349990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.350096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.350123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.350251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.350281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.350404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.350448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 
00:34:18.833 [2024-07-25 19:07:30.350572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.350603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.350748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.350778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.350914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.350944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.833 qpair failed and we were unable to recover it. 00:34:18.833 [2024-07-25 19:07:30.351071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.833 [2024-07-25 19:07:30.351099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.351204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.351231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.351374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.351404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.351541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.351571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.351684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.351714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.351884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.351913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.352035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.352071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 
00:34:18.834 [2024-07-25 19:07:30.352210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.352236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.352376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.352420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.352543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.352592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.352733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.352778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.352889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.352914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.353070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.353097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.353231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.353257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.353403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.353447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.353562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.353592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.353740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.353766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 
00:34:18.834 [2024-07-25 19:07:30.353924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.353950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.354054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.354090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.354228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.354254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.354374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.354417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.354562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.354592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.354712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.354738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.354874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.354901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.355029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.355055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.355200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.355246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.355397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.355442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 
00:34:18.834 [2024-07-25 19:07:30.355592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.355638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.355762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.355789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.355908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.355934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.356049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.356116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.356275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.356323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.356498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.356527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.834 [2024-07-25 19:07:30.356671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.834 [2024-07-25 19:07:30.356697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.834 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.356825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.356853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.356956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.356982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.357125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.357156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 
00:34:18.835 [2024-07-25 19:07:30.357319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.357363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.357484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.357529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.357680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.357723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.357840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.357866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.357989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.358016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.358152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.358181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.358304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.358331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.358435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.358462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.358606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.358635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.358854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.358911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 
00:34:18.835 [2024-07-25 19:07:30.359029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.359065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.359254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.359281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.359432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.359467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.359586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.359615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.359735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.359762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.359945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.359975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.360123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.360150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.360288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.360317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.360437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.360467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.360581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.360611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 
00:34:18.835 [2024-07-25 19:07:30.360792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.360839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.360970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.360997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.361102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.361130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.361235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.361266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.361411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.361458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.361634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.361678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.361799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.361828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.361952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.361978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.362075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.362101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.362253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.362280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 
00:34:18.835 [2024-07-25 19:07:30.362440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.362466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.362590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.835 [2024-07-25 19:07:30.362618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.835 qpair failed and we were unable to recover it. 00:34:18.835 [2024-07-25 19:07:30.362767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.362798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.362944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.362972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.363140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.363188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.363306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.363336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.363474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.363500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.363591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.363617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.363743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.363770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.363890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.363930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 
00:34:18.836 [2024-07-25 19:07:30.364069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.364098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.364227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.364255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.364359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.364387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.364517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.364544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.364665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.364692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.364839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.364885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.365019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.365073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.365222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.365266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.365439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.365483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.365624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.365667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 
00:34:18.836 [2024-07-25 19:07:30.365823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.365853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.365958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.365984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.366129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.366163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.366276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.366306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.366444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.366473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.366587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.366616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.366791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.366837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.366935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.366962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.367137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.367181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.367327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.367372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 
00:34:18.836 [2024-07-25 19:07:30.367492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.367522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.836 [2024-07-25 19:07:30.367667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.836 [2024-07-25 19:07:30.367694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.836 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.367816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.367843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.367950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.367976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.368103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.368131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.368286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.368314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.368449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.368477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.368627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.368654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.368784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.368813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.368913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.368940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 
00:34:18.837 [2024-07-25 19:07:30.369043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.369079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.369233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.369263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.369378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.369408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.369513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.369544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.369679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.369708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.369843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.369872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.370006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.370036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.370189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.370217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.370367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.370412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.370609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.370653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 
00:34:18.837 [2024-07-25 19:07:30.370880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.370932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.371081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.371126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.371290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.371318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.371461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.371492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.371668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.371729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.371931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.372001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.372148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.372176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.372296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.372329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.372481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.372547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.372677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.372735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 
00:34:18.837 [2024-07-25 19:07:30.372898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.372925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.373081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.373110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.373197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.373223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.373382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.373409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.373504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.373531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.373685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.373712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.373853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.837 [2024-07-25 19:07:30.373894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.837 qpair failed and we were unable to recover it. 00:34:18.837 [2024-07-25 19:07:30.374046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.374087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.374213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.374243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.374431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.374476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 
00:34:18.838 [2024-07-25 19:07:30.374626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.374671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.374803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.374830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.374996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.375024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.375171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.375216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.375344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.375387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.375532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.375560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.375686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.375714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.375826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.375867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.376002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.376031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.376147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.376186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 
00:34:18.838 [2024-07-25 19:07:30.376363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.376393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.376500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.376527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.376663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.376690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.376895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.376925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.377083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.377110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.377273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.377319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.377443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.377492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.377600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.377646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.377884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.377938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 00:34:18.838 [2024-07-25 19:07:30.378073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.838 [2024-07-25 19:07:30.378149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.838 qpair failed and we were unable to recover it. 
00:34:18.838 [2024-07-25 19:07:30.378275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.838 [2024-07-25 19:07:30.378307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:18.838 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 19:07:30.378 through 19:07:30.411 for tqpair=0x7f5100000b90, tqpair=0x7f50f8000b90 and briefly tqpair=0x795570, all targeting addr=10.0.0.2, port=4420 ...]
00:34:18.844 [2024-07-25 19:07:30.411885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.844 [2024-07-25 19:07:30.411911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:18.844 qpair failed and we were unable to recover it.
00:34:18.844 [2024-07-25 19:07:30.412070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.844 [2024-07-25 19:07:30.412097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.844 qpair failed and we were unable to recover it. 00:34:18.844 [2024-07-25 19:07:30.412222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.412268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.412419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.412464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.412621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.412648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.412800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.412828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.412977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.413004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.413136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.413166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.413332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.413378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.413509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.413553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.413684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.413710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 
00:34:18.845 [2024-07-25 19:07:30.413840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.413868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.414008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.414035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.414191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.414240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.414391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.414438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.414592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.414619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.414746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.414772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.414925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.414956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.415090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.415118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.415242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.415268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.415410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.415437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 
00:34:18.845 [2024-07-25 19:07:30.415571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.415598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.415744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.415769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.415870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.415903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.416029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.416055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.416191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.416235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.416386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.416432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.416575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.416619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.416725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.416752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.416880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.416906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.417067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.417094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 
00:34:18.845 [2024-07-25 19:07:30.417242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.417287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.417471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.845 [2024-07-25 19:07:30.417516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.845 qpair failed and we were unable to recover it. 00:34:18.845 [2024-07-25 19:07:30.417658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.417705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.417830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.417856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.417950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.417976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.418129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.418175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.418299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.418326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.418455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.418482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.418610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.418635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.418737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.418770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 
00:34:18.846 [2024-07-25 19:07:30.418897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.418924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.419072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.419099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.419211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.419241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.419440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.419467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.419604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.419630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.419757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.419784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.419938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.419965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.420090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.420117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.420219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.420247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.420353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.420379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 
00:34:18.846 [2024-07-25 19:07:30.420513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.420539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.420663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.420691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.420815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.420842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.420967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.420993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.421145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.421190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.421336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.421380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.421500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.421532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.421660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.846 [2024-07-25 19:07:30.421687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.846 qpair failed and we were unable to recover it. 00:34:18.846 [2024-07-25 19:07:30.421796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.421823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.421976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.422003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 
00:34:18.847 [2024-07-25 19:07:30.422153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.422198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.422347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.422392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.422534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.422583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.422677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.422704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.422829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.422856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.422982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.423009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.423158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.423203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.423415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.423442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.423566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.423594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.423753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.423780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 
00:34:18.847 [2024-07-25 19:07:30.423910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.423937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.424038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.424073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.424193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.424220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.424318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.424345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.424475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.424501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.424633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.424660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.424796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.424823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.424976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.425003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.425142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.425186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.425340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.425386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 
00:34:18.847 [2024-07-25 19:07:30.425534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.425563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.425687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.425715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.425848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.425876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.425985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.426012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.426205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.426250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.426397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.426442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.426586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.426634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.426729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.426756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.426889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.426916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.427022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.427049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 
00:34:18.847 [2024-07-25 19:07:30.427190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.427218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.427340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.847 [2024-07-25 19:07:30.427371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.847 qpair failed and we were unable to recover it. 00:34:18.847 [2024-07-25 19:07:30.427542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.427569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.427661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.427686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.427789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.427816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.427951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.427977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.428111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.428143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.428246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.428274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.428402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.428430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.428524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.428551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 
00:34:18.848 [2024-07-25 19:07:30.428679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.428706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.428832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.428859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.428981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.429008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.429111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.429137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.429274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.429303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.429465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.429509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.429612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.429638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.429757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.429784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.429882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.429909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.430043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.430076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 
00:34:18.848 [2024-07-25 19:07:30.430232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.430278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.430453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.430483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.430629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.430656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.430787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.430815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.430938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.430964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.431122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.431148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.431322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.431349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.431478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.431505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.431662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.431689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 00:34:18.848 [2024-07-25 19:07:30.431816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.848 [2024-07-25 19:07:30.431843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.848 qpair failed and we were unable to recover it. 
00:34:18.848 [2024-07-25 19:07:30.431942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.848 [2024-07-25 19:07:30.431969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:18.848 qpair failed and we were unable to recover it.
00:34:18.848 - 00:34:18.850 [2024-07-25 19:07:30.432076 - 19:07:30.440280] the same error pair repeats for the remaining reconnect attempts; the failing tqpair alternates among 0x7f50f8000b90, 0x7f50f0000b90, and 0x795570, always with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."
00:34:18.850 [2024-07-25 19:07:30.440428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.440459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.440619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.440646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.440751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.440777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.440880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.440907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.441030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.441056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.441204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.441232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.441355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.441384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.441518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.441562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.441713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.441739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.441896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.441923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 
00:34:18.850 [2024-07-25 19:07:30.442024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.442051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.442168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.442194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.442328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.442355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.442508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.442537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.442693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.442719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.442845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.442872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.442997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.443023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.443207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.443252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.443370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.443402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.443580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.443609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 
00:34:18.850 [2024-07-25 19:07:30.443779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.443830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.443962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.443989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.444119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.444145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.850 qpair failed and we were unable to recover it. 00:34:18.850 [2024-07-25 19:07:30.444291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.850 [2024-07-25 19:07:30.444321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.444474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.444504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.444644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.444674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.444788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.444817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.444952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.444982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.445132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.445159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.445334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.445362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 
00:34:18.851 [2024-07-25 19:07:30.445517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.445546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.445686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.445715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.445950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.445980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.446095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.446143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.446252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.446279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.446434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.446461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.446606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.446636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.446786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.446828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.446968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.446994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.447099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.447128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 
00:34:18.851 [2024-07-25 19:07:30.447233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.447260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.447383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.447412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.447561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.447590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.447727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.447756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.447898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.447927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.448069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.448114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.448214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.448240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.448356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.448399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.448544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.448573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.851 [2024-07-25 19:07:30.448712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.448741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 
00:34:18.851 [2024-07-25 19:07:30.448881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.851 [2024-07-25 19:07:30.448910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.851 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.449096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.449123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.449262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.449289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.449434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.449464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.449564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.449593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.449739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.449769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.449932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.449962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.450113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.450139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.450239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.450266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.450393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.450437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 
00:34:18.852 [2024-07-25 19:07:30.450583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.450615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.450780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.450824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.450976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.451002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.451127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.451154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.451265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.451291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.451468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.451513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.451619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.451645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.451768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.451793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.451925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.451954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.452086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.452113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 
00:34:18.852 [2024-07-25 19:07:30.452218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.452245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.452395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.452425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.452528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.452570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.452718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.452753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.452897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.452924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.453021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.453049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.453169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.453196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.453357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.453387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.453524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.453554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.453768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.453798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 
00:34:18.852 [2024-07-25 19:07:30.453953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.453981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.454091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.454117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.454261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.454305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.454502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.454554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.454693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.852 [2024-07-25 19:07:30.454723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.852 qpair failed and we were unable to recover it. 00:34:18.852 [2024-07-25 19:07:30.454866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.454892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.454996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.455023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.455165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.455192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.455404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.455434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.455572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.455603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 
00:34:18.853 [2024-07-25 19:07:30.455770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.455800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.455938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.455969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.456111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.456139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.456270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.456297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.456417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.456447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.456569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.456612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.456751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.456781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.456926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.456955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.457081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.457108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.457313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.457360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 
00:34:18.853 [2024-07-25 19:07:30.457535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.457565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.457680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.457724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.457871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.457901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.458043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.458082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.458235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.458262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.458386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.458413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.458506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.458533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.458698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.458727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.458863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.458892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.459033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.459071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 
00:34:18.853 [2024-07-25 19:07:30.459231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.459258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.459405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.459435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.459573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.459603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.459759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.459794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.459936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.459966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.460098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.460126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.460261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.460288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.460437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.460467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.460581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.460611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.460731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.460774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 
00:34:18.853 [2024-07-25 19:07:30.460893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.460922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.461026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.853 [2024-07-25 19:07:30.461056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.853 qpair failed and we were unable to recover it. 00:34:18.853 [2024-07-25 19:07:30.461221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.461248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 00:34:18.854 [2024-07-25 19:07:30.461399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.461426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 00:34:18.854 [2024-07-25 19:07:30.461573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.461603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 00:34:18.854 [2024-07-25 19:07:30.461759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.461789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 00:34:18.854 [2024-07-25 19:07:30.461900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.461931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 00:34:18.854 [2024-07-25 19:07:30.462130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.462158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 00:34:18.854 [2024-07-25 19:07:30.462257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.462285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 00:34:18.854 [2024-07-25 19:07:30.462394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.854 [2024-07-25 19:07:30.462424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.854 qpair failed and we were unable to recover it. 
00:34:18.854 [2024-07-25 19:07:30.462574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.854 [2024-07-25 19:07:30.462600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:18.854 qpair failed and we were unable to recover it.
00:34:18.854 [2024-07-25 19:07:30.464323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.854 [2024-07-25 19:07:30.464383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:18.854 qpair failed and we were unable to recover it.
00:34:18.856 [2024-07-25 19:07:30.475064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.856 [2024-07-25 19:07:30.475123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:18.856 qpair failed and we were unable to recover it.
00:34:18.856 [the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats continuously from 19:07:30.462 through 19:07:30.498, alternating among tqpair=0x7f50f0000b90, 0x7f50f8000b90 and 0x795570, always with addr=10.0.0.2, port=4420]
00:34:18.861 [2024-07-25 19:07:30.497860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.861 [2024-07-25 19:07:30.497889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:18.861 qpair failed and we were unable to recover it.
00:34:18.861 [2024-07-25 19:07:30.498026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.498056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.498198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.498224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.498354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.498381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.498494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.498524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.498729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.498759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.498897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.498926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.499125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.499166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.499329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.499358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.499463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.499491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.499642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.499669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 
00:34:18.861 [2024-07-25 19:07:30.499807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.499837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.499974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.500003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.500163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.500191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.500355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.500411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.500593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.500644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.500803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.500873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.501000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.501026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.501152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.501180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.501322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.501372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.501526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.501553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 
00:34:18.861 [2024-07-25 19:07:30.501704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.501730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.501856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.501883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.501990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.502018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.502138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.502165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.502288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.502323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.502496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.502548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.502698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.502728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.502865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.502894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.503081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.503117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.503248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.503274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 
00:34:18.861 [2024-07-25 19:07:30.503398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.503425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.503574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.503605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.503736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.503765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.503921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.503947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.504109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.861 [2024-07-25 19:07:30.504139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.861 qpair failed and we were unable to recover it. 00:34:18.861 [2024-07-25 19:07:30.504260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.504292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.504481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.504513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.504684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.504714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.504821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.504852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.505001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.505029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 
00:34:18.862 [2024-07-25 19:07:30.505163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.505191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.505293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.505344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.505521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.505550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.505712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.505742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.505840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.505882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.506036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.506068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.506199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.506225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.506370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.506401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.506567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.506596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.506737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.506766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 
00:34:18.862 [2024-07-25 19:07:30.506900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.506929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.507035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.507071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.507213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.507239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.507331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.507357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.507526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.507555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.507687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.507733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.507897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.507926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.508039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.508076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.508206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.508233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.508330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.508375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 
00:34:18.862 [2024-07-25 19:07:30.508540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.508568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.508679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.508708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.508816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.508844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.508988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.509016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.509197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.862 [2024-07-25 19:07:30.509224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.862 qpair failed and we were unable to recover it. 00:34:18.862 [2024-07-25 19:07:30.509364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.509393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.509556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.509584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.509711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.509740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.509866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.509895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.510073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.510100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 
00:34:18.863 [2024-07-25 19:07:30.510194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.510220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.510380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.510406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.510544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.510587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.510749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.510779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.510920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.510949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.511071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.511097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.511193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.511219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.511360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.511389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.511579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.511608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.511767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.511796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 
00:34:18.863 [2024-07-25 19:07:30.511923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.511952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.512081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.512109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.512260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.512291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.512400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.512429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.512536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.512579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.512737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.512767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.512880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.512909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.513080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.513107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.513199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.513225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.513324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.513351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 
00:34:18.863 [2024-07-25 19:07:30.513482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.513511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.513648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.513678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.513807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.513837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.513967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.514007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.514150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.514178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.514303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.514330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.514457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.514501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.514754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.514805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.514901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.514927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.515105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.515133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 
00:34:18.863 [2024-07-25 19:07:30.515263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.515290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.515418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.515445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.863 [2024-07-25 19:07:30.515604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.863 [2024-07-25 19:07:30.515634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.863 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.515806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.515857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.515969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.516000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.516151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.516179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.516328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.516357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.516542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.516586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.516697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.516740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.516839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.516870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 
00:34:18.864 [2024-07-25 19:07:30.516996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.517022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.517179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.517209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.517328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.517354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.517504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.517534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.517662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.517691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.517823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.517852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.517992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.518021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.518204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.518233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.518384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.518429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.518582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.518628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 
00:34:18.864 [2024-07-25 19:07:30.518752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.518797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.518895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.518923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.519052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.519088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.519204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.519248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.519380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.519423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.519573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.519617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.519749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.519776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.519911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.519938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.520075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.520102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.520245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.520290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 
00:34:18.864 [2024-07-25 19:07:30.520435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.520480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.520633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.520659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.520815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.520841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.520972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.520999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.521137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.521181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.521318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.521362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.521510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.521554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.521706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.521733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.521830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.521857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 00:34:18.864 [2024-07-25 19:07:30.521985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.522011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.864 qpair failed and we were unable to recover it. 
00:34:18.864 [2024-07-25 19:07:30.522158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.864 [2024-07-25 19:07:30.522202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.522377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.522425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.522578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.522622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.522729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.522757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.522912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.522939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.523256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.523285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.523459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.523489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.523663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.523690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.523841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.523868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.524017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.524049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 
00:34:18.865 [2024-07-25 19:07:30.524206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.524250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.524395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.524439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.524584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.524629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.524749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.524776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.524901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.524928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.525057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.525094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.525271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.525316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.525461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.525507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.525635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.525662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.525781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.525821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 
00:34:18.865 [2024-07-25 19:07:30.525947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.525975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.526122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.526152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.526264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.526290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.526422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.526465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.526581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.526610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.526773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.526802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.526965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.526994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.527135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.527162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.527275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.527304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.527436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.527465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 
00:34:18.865 [2024-07-25 19:07:30.527636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.527665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.527787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.527813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.527948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.527975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.528107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.528133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.528272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.528301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.528471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.528501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.528663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.528699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.528866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.865 [2024-07-25 19:07:30.528895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.865 qpair failed and we were unable to recover it. 00:34:18.865 [2024-07-25 19:07:30.529033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.529068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.529221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.529247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 
00:34:18.866 [2024-07-25 19:07:30.529466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.529521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.529700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.529744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.529892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.529937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.530071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.530099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.530250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.530276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.530451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.530494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.530676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.530727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.530836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.530863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.530988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.531016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.531163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.531191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 
00:34:18.866 [2024-07-25 19:07:30.531354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.531380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.531509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.531536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.531703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.531732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.531862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.531891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.531996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.532026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.532148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.532175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.532307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.532333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.532477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.532506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.532675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.532705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.532865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.532894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 
00:34:18.866 [2024-07-25 19:07:30.532991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.533020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.533193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.533220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.533387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.533416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.533533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.533566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.533765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.533794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.533927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.533969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.534101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.534129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.534247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.534274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.534426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.534455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.534601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.534631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 
00:34:18.866 [2024-07-25 19:07:30.534792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.534822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.534984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.535014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.535164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.535191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.535331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.535372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.535496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.535546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.535690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.866 [2024-07-25 19:07:30.535734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.866 qpair failed and we were unable to recover it. 00:34:18.866 [2024-07-25 19:07:30.535850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.535879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.536035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.536071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.536222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.536249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.536411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.536438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 
00:34:18.867 [2024-07-25 19:07:30.536565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.536592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.536741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.536768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.536871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.536899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.537029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.537056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.537229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.537258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.537546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.537576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.537715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.537744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.537883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.537914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.538065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.538103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.538252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.538278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 
00:34:18.867 [2024-07-25 19:07:30.538422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.538453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.538635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.538665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.538784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.538826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.539001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.539031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.539199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.539226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.539329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.539358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.539513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.539540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.539666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.539710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.539871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.539901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.540016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.540046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 
00:34:18.867 [2024-07-25 19:07:30.540194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.540221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.540315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.540343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.540520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.540550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.540667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.540710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.540874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.540903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.541073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.541118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.541256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.541282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.867 [2024-07-25 19:07:30.541490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.867 [2024-07-25 19:07:30.541535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.867 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.541691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.541724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.541923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.541953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 
00:34:18.868 [2024-07-25 19:07:30.542110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.542138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.542268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.542295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.542430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.542457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.542587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.542614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.542829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.542859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.543039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.543073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.543202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.543230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.543329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.543361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.543527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.543557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.543722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.543752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 
00:34:18.868 [2024-07-25 19:07:30.543964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.543994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.544222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.544249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.544420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.544451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.544625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.544652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.544786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.544830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.544972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.545000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.545160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.545187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.545314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.545341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.545463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.545508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.545646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.545676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 
00:34:18.868 [2024-07-25 19:07:30.545846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.545876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.546047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.546102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.546230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.546257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.546396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.546436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.546582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.546627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.546744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.546774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.546882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.546909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.547076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.547112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.547284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.547332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.547460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.547505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 
00:34:18.868 [2024-07-25 19:07:30.547745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.547792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.547923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.547950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.548075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.548103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.548226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.548253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.548364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.868 [2024-07-25 19:07:30.548403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.868 qpair failed and we were unable to recover it. 00:34:18.868 [2024-07-25 19:07:30.548534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.548561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.548715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.548741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.548867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.548902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.549006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.549032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.549184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.549228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 
00:34:18.869 [2024-07-25 19:07:30.549458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.549490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.549627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.549697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.549917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.549967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.550147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.550176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.550352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.550398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.550538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.550583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.550836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.550888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.551019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.551045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.551185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.551212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.551356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.551401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 
00:34:18.869 [2024-07-25 19:07:30.551584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.551631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.551779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.551824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.551967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.551994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.552136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.552181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.552308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.552344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.552442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.552468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.552598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.552625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.552724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.552750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.552885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.552913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.553043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.553080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 
00:34:18.869 [2024-07-25 19:07:30.553246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.553289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.553443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.553474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.553605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.553635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.553754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.553783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.553950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.553979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.554130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.554157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.554354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.554421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.554555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.554585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.554721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.554751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.554926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.554954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 
00:34:18.869 [2024-07-25 19:07:30.555075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.555112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.555263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.555307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.869 [2024-07-25 19:07:30.555460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.869 [2024-07-25 19:07:30.555490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.869 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.555689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.555719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.555887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.555919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.556074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.556112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.556229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.556259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.556451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.556500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.556700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.556762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.556911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.556936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 
00:34:18.870 [2024-07-25 19:07:30.557071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.557110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.557254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.557300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.557419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.557463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.557606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.557655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.557789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.557816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.557970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.557996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.558151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.558196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.558332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.558361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.558514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.558546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.558732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.558771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 
00:34:18.870 [2024-07-25 19:07:30.558970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.559014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.559158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.559191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.559357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.559389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.559588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.559655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.559809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.559836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.559933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.559958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.560097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.560128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.560292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.560341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.560482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.560512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.560739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.560790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 
00:34:18.870 [2024-07-25 19:07:30.560916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.560942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.561110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.561154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.561302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.561337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.561488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.561515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.561663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.561692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.561808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.561838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.561972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.562001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.562158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.562189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.562333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.562363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.562502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.562532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 
00:34:18.870 [2024-07-25 19:07:30.562701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.562730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.562844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.870 [2024-07-25 19:07:30.562873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.870 qpair failed and we were unable to recover it. 00:34:18.870 [2024-07-25 19:07:30.563014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.563041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.563208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.563236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.563377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.563412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.563543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.563572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.563712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.563742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.563880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.563912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.564079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.564122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.564250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.564277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 
00:34:18.871 [2024-07-25 19:07:30.564399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.564442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.564556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.564586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.564700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.564730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.564869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.564898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.565070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.565123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.565241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.565272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.565405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.565435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.565538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.565567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.565757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.565814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.565974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.566003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 
00:34:18.871 [2024-07-25 19:07:30.566137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.566164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.566281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.566336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.566506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.566551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.566714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.566773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.566902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.566928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.567073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.567116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.567267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.567306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.567542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.567573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.567841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.567891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.568070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.568108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 
00:34:18.871 [2024-07-25 19:07:30.568263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.568289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.568457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.568516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.568695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.568741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.568842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.568870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.568999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.569027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.569179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.569224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.569346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.871 [2024-07-25 19:07:30.569390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.871 qpair failed and we were unable to recover it. 00:34:18.871 [2024-07-25 19:07:30.569537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.569567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.569730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.569774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.569929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.569957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 
00:34:18.872 [2024-07-25 19:07:30.570066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.570092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.570236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.570281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.570445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.570485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.570617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.570646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.570772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.570804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.570902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.570928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.571037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.571084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.571305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.571348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.571481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.571510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.571673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.571703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 
00:34:18.872 [2024-07-25 19:07:30.571808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.571838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.571955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.571983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.572105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.572131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.572275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.572329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.572525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.572552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.572701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.572793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.572923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.572950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.573073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.573110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.573289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.573337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.573480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.573525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 
00:34:18.872 [2024-07-25 19:07:30.573721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.573764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.573888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.573927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.574064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.574092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.574242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.574284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.574436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.574480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.574623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.574695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.574825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.574859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.574967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.574992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.575149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.575194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.575367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.575414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 
00:34:18.872 [2024-07-25 19:07:30.575592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.575639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.575744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.575771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.575921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.575947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.576047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.576083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.576196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.576239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.576349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.576378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.576513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.576540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.872 qpair failed and we were unable to recover it. 00:34:18.872 [2024-07-25 19:07:30.576685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.872 [2024-07-25 19:07:30.576712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.576866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.576891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.577011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.577038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 
00:34:18.873 [2024-07-25 19:07:30.577199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.577239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.577362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.577402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.577560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.577588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.577713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.577740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.577863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.577890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.577990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.578018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.578117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.578144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.578292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.578329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.578461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.578503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.578662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.578688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 
00:34:18.873 [2024-07-25 19:07:30.578845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.578874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.579084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.579136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.579276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.579305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.579429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.579458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.579706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.579758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.579876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.579906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.580031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.580063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.580195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.580221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.580373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.580407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.580526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.580556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 
00:34:18.873 [2024-07-25 19:07:30.580663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.580693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.580887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.580937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.581094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.581128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.581234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.581263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.581399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.581444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.581618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.581663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.581793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.581819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.581954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.581980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.582108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.582134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.582243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.582283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 
00:34:18.873 [2024-07-25 19:07:30.582399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.582428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.582548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.582586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.582735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.582763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.582916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.582943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.583048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.583103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.583254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.583284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.583390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.583420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.583568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.583598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.583766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.583813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.583943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.583969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 
00:34:18.873 [2024-07-25 19:07:30.584119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.873 [2024-07-25 19:07:30.584164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.873 qpair failed and we were unable to recover it. 00:34:18.873 [2024-07-25 19:07:30.584284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.584326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.584467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.584497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.584650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.584698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.584868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.584930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.585108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.585140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.585310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.585345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.585492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.585520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.585611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.585639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.585782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.585860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 
00:34:18.874 [2024-07-25 19:07:30.586030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.586077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.586211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.586251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.586391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.586422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.586642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.586672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.586797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.586824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.586965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.586993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.587139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.587166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.587270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.587296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.587452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.587498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.587640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.587670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 
00:34:18.874 [2024-07-25 19:07:30.587781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.587810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.587951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.587981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.588133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.588160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.588282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.588308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.588444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.588502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.588689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.588735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.588886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.588930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.589066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.589094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.589218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.589263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.589392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.589437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 
00:34:18.874 [2024-07-25 19:07:30.589559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.589587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.589685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.589710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.589813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.589841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.589960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.589986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.590110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.590140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.590283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.590310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.590439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.590466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.590641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.590686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.590815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.590842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.590964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.590990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 
00:34:18.874 [2024-07-25 19:07:30.591119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.591146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.591248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.591275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.591399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.591428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.591534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.591560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.874 [2024-07-25 19:07:30.591693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.874 [2024-07-25 19:07:30.591734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.874 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.591894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.591928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.592043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.592080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.592183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.592211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.592334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.592361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.592468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.592496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 
00:34:18.875 [2024-07-25 19:07:30.592621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.592667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.592806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.592833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.592958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.592985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.593108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.593139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.593298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.593343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.593436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.593461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.593622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.593648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.593776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.593803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.593935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.593961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.594122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.594154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 
00:34:18.875 [2024-07-25 19:07:30.594302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.594332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.594469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.594499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.594689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.594758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.594893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.594923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.595087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.595118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.595225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.595255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.595376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.595406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.595547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.595577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.595711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.595741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.595929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.595959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 
00:34:18.875 [2024-07-25 19:07:30.596093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.596138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.596271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.596301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.596455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.596486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.596623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.596653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.596855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.596887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.597030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.597069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.597190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.597216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.597367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.597413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.597584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.597630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.597778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.597823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 
00:34:18.875 [2024-07-25 19:07:30.597957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.597985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.598080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.598106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.875 qpair failed and we were unable to recover it. 00:34:18.875 [2024-07-25 19:07:30.598207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.875 [2024-07-25 19:07:30.598234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.598341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.598385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.598534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.598565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.598701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.598736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.598905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.598935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.599077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.599123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.599252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.599279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.599450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.599480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 
00:34:18.876 [2024-07-25 19:07:30.599590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.599620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.599721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.599751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.599905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.599961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.600117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.600145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.600293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.600337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.600482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.600526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.600693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.600762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.600888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.600915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.601043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.601079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.601190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.601217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 
00:34:18.876 [2024-07-25 19:07:30.601361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.601391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.601520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.601564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.601738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.601768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.601902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.601932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.602146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.602174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.602297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.602324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.602501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.602531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.602695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.602725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.602868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.602898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.603033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.603070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 
00:34:18.876 [2024-07-25 19:07:30.603221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.603248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.603402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.603432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.603570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.603623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.603857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.603914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.604029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.604067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.604228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.604260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.604390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.604417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.604516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.604542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.604702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.604732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.604891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.604935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 
00:34:18.876 [2024-07-25 19:07:30.605066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.605095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.605200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.605226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.605319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.605345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.605492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.605535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.605753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.605784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.605996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.876 [2024-07-25 19:07:30.606030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.876 qpair failed and we were unable to recover it. 00:34:18.876 [2024-07-25 19:07:30.606194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.606222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.606378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.606408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.606581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.606636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.606769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.606799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 
00:34:18.877 [2024-07-25 19:07:30.606945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.606975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.607119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.607147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.607277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.607304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.607473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.607541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.607658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.607702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.607842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.607873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.608008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.608038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.608196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.608224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.608363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.608393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.608544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.608574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 
00:34:18.877 [2024-07-25 19:07:30.608787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.608816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.608954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.608984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.609091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.609137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.609266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.609293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.609419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.609464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.609634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.609663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.609806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.609837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.609980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.610010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.610162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.610190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.610287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.610312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 
00:34:18.877 [2024-07-25 19:07:30.610437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.610465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.610592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.610622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.610784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.610814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.610950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.610980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.611092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.611137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.611268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.611295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.611425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.611470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.611577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.611608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.611755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.611785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 00:34:18.877 [2024-07-25 19:07:30.611920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.877 [2024-07-25 19:07:30.611950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.877 qpair failed and we were unable to recover it. 
00:34:18.877 [2024-07-25 19:07:30.612096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.877 [2024-07-25 19:07:30.612140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:18.877 qpair failed and we were unable to recover it.
00:34:18.877 [...] (the same three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it -- repeats for every subsequent reconnect attempt from 2024-07-25 19:07:30.612269 through 19:07:30.646456)
00:34:18.883 [2024-07-25 19:07:30.648065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.883 [2024-07-25 19:07:30.648111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:18.883 qpair failed and we were unable to recover it.
00:34:18.883 [2024-07-25 19:07:30.648232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.648259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.648392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.648419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.648588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.648618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.648727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.648757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.648909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.648939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.649124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.649151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.649256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.649289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.649466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.649493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.649592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.649619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.649784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.649814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 
00:34:18.883 [2024-07-25 19:07:30.650015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.650045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.650207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.650234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.650441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.650506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.650656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.650683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.650840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.650882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.651034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.651066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.651224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.651254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.651380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.651408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.651557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.651585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.651732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.651759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 
00:34:18.883 [2024-07-25 19:07:30.651905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.651935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.652114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.652142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.652267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.652298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.652475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.652515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.652694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.883 [2024-07-25 19:07:30.652762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.883 qpair failed and we were unable to recover it. 00:34:18.883 [2024-07-25 19:07:30.652915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.652942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.653071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.653099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.653274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.653303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.653442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.653469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.653594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.653621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 
00:34:18.884 [2024-07-25 19:07:30.653773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.653803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.653981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.654008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.654217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.654247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.654380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.654410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.654566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.654593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.654721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.654747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.654869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.654901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.655066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.655094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.655187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.655214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.655335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.655362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 
00:34:18.884 [2024-07-25 19:07:30.655519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.655545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.655681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.655721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.655874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.655903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.656051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.656087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.656228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.656255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.656490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.656541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.656687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.656714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.656812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.656839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.657000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.657027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.657226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.657253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 
00:34:18.884 [2024-07-25 19:07:30.657394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.657421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.657636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.657683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.657833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.657861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.657999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.658042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.884 qpair failed and we were unable to recover it. 00:34:18.884 [2024-07-25 19:07:30.658188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.884 [2024-07-25 19:07:30.658219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.658375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.658402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.658524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.658568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.658736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.658765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.658914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.658942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.659080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.659117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 
00:34:18.885 [2024-07-25 19:07:30.659270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.659327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.659502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.659529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.659699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.659734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.659903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.659931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.660083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.660120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.660289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.660328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.660466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.660497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.660647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.660674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.660812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.660857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.661004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.661036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 
00:34:18.885 [2024-07-25 19:07:30.661230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.661256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.661391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.661417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.661566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.661593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.661724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.661751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.661879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.661922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.662095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.662126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.662284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.662322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.662451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.662479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.662588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.662618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.662798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.662825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 
00:34:18.885 [2024-07-25 19:07:30.662979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.663009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.663175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.663203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.663304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.663336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.663489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.663515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.663656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.663721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.663902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.663929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.664028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.664079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.664234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.664261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.664413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.664439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.664583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.664612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 
00:34:18.885 [2024-07-25 19:07:30.664738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.664767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.664882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.885 [2024-07-25 19:07:30.664908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.885 qpair failed and we were unable to recover it. 00:34:18.885 [2024-07-25 19:07:30.665029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.665055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.665207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.665236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.665364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.665390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.665478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.665504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.665646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.665675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.665847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.665874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.666027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.666053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.666191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.666218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 
00:34:18.886 [2024-07-25 19:07:30.666320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.666346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.666465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.666491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.666617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.666647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.666778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.666804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.666924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.666967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.667095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.667148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.667279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.667305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.667409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.667435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.667553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.667582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.667701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.667726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 
00:34:18.886 [2024-07-25 19:07:30.667843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.667869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.668037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.668074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.668245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.668271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.668367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.668408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.668508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.668537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.668691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.668717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.668857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.668904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.669039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.669075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.669199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.669225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.669361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.669387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 
00:34:18.886 [2024-07-25 19:07:30.669539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.669567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.669714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.669740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.669864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.669890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.670051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.670086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.670230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.670257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.670353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.670379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.670505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.670531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.670622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.670649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.670770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.886 [2024-07-25 19:07:30.670797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.886 qpair failed and we were unable to recover it. 00:34:18.886 [2024-07-25 19:07:30.670918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.670959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 
00:34:18.887 [2024-07-25 19:07:30.671088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.671117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.671259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.671286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.671440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.671467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.671595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.671622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.671717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.671744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.671894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.671920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.672077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.672105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.672232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.672260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.672366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.672393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.672550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.672576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 
00:34:18.887 [2024-07-25 19:07:30.672745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.672775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.672922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.672953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.673083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.673114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.673274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.673300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.673442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.673509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.673683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.673709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.673825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.673868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.673980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.674012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.674132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.674160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.674260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.674286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 
00:34:18.887 [2024-07-25 19:07:30.674415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.674442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.674575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.674602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.674702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.674728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.674900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.674928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.675084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.675111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.675242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.675269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.675400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.675426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.675548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.675574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.675694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.675720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.675878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.675907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 
00:34:18.887 [2024-07-25 19:07:30.676050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.676083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.676177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.676204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.676321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.676364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.676463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.676489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.676640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.676666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.676780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.676809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.887 [2024-07-25 19:07:30.676950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.887 [2024-07-25 19:07:30.676976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.887 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.677069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.677096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.677233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.677261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.677421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.677447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 
00:34:18.888 [2024-07-25 19:07:30.677543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.677568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.677734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.677763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.677910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.677940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.678120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.678148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.678274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.678300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.678451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.678478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.678574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.678617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.678768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.678796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.678942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.678969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.679141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.679171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 
00:34:18.888 [2024-07-25 19:07:30.679314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.679344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.679484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.679510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.679614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.679646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.679776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.679803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.680006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.680034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.680166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.680194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.680287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.680315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.680410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.680437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.680556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.680583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.680753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.680780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 
00:34:18.888 [2024-07-25 19:07:30.680932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.680961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.681110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.681137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.681285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.681312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.681471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.681498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.681623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.681665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.681831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.681861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.682016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.682044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.682180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.682223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.682337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.682369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.682516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.682542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 
00:34:18.888 [2024-07-25 19:07:30.682672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.682698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.682841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.682871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:18.888 [2024-07-25 19:07:30.682999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.888 [2024-07-25 19:07:30.683025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:18.888 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.683159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.683187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.683314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.683341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.683462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.683489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.683622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.683665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.683775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.683818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.683945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.683972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.684134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.684179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 
00:34:19.176 [2024-07-25 19:07:30.684357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.684384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.684477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.684503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.684617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.684644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.684814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.684844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.685067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.685112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.685263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.685290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.685430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.685460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.685609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.685636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.685766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.685793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.685933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.685965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 
00:34:19.176 [2024-07-25 19:07:30.686110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.686137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.686264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.686291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.686440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.686528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.686706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.686733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.686860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.686904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.687047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.687100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.687207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.687234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.687362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.687389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.687598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.687628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.176 [2024-07-25 19:07:30.687751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.687778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 
00:34:19.176 [2024-07-25 19:07:30.687906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.176 [2024-07-25 19:07:30.687933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.176 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.688083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.688113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.688256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.688283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.688410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.688438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.688623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.688676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.688819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.688845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.688977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.689004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.689177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.689206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.689341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.689368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.689497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.689523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 
00:34:19.177 [2024-07-25 19:07:30.689646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.689673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.689863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.689889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.690055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.690092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.690226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.690254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.690386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.690412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.690510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.690536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.690628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.690654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.690779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.690805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.690972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.691001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.691156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.691185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 
00:34:19.177 [2024-07-25 19:07:30.691307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.691334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.691543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.691573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.691710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.691740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.691861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.691888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.692043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.692079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.692266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.692296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.692407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.692434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.692589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.692616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.692793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.692823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.692973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.693000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 
00:34:19.177 [2024-07-25 19:07:30.693151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.693179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.693350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.693380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.693524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.693556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.693693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.693720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.693841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.693868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.694040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.694072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.694252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.694281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.177 qpair failed and we were unable to recover it. 00:34:19.177 [2024-07-25 19:07:30.694452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.177 [2024-07-25 19:07:30.694481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.694658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.694684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.694854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.694884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 
00:34:19.178 [2024-07-25 19:07:30.694999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.695028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.695189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.695216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.695375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.695419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.695672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.695722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.695876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.695903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.696026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.696052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.696206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.696236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.696379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.696406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.696559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.696586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.696749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.696776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 
00:34:19.178 [2024-07-25 19:07:30.696927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.696957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.697136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.697164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.697292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.697320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.697474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.697501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.697622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.697649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.697781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.697809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.697933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.697960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.698084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.698111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.698245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.698272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.698469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.698496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 
00:34:19.178 [2024-07-25 19:07:30.698596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.698639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.698744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.698773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.698922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.698949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.699100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.699126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.699266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.699295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.699469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.699496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.699616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.699658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.699805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.699833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.699984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.700011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.700132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.700159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 
00:34:19.178 [2024-07-25 19:07:30.700284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.700311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.700493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.700519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.700666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.700697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.700845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.700875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.178 [2024-07-25 19:07:30.701025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.178 [2024-07-25 19:07:30.701052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.178 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.701202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.701247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.701416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.701445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.701573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.701600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.701725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.701752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.701875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.701903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 
00:34:19.179 [2024-07-25 19:07:30.702022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.702053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.702243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.702270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.702463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.702516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.702663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.702691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.702858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.702888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.703039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.703079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.703214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.703241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.703367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.703394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.703565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.703591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.703690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.703716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 
00:34:19.179 [2024-07-25 19:07:30.703867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.703893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.704039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.704090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.704192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.704219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.704346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.704372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.704495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.704524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.704672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.704698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.704870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.704899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.705035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.705070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.705194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.705220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.705388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.705432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 
00:34:19.179 [2024-07-25 19:07:30.705677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.705727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.705853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.705880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.706034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.706067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.706185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.706214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.706393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.706419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.706534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.706560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.706658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.706684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.706776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.706802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.706928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.706954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.707160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.707201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 
00:34:19.179 [2024-07-25 19:07:30.707337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.707366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.179 [2024-07-25 19:07:30.707549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.179 [2024-07-25 19:07:30.707580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.179 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.707698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.707734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.707916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.707944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.708085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.708115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.708250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.708281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.708457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.708484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.708606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.708650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.708829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.708856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.708956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.708983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 
00:34:19.180 [2024-07-25 19:07:30.709109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.709137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.709304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.709333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.709477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.709504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.709631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.709659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.709815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.709845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.709988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.710015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.710153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.710180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.710314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.710342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.710497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.710523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.710624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.710667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 
00:34:19.180 [2024-07-25 19:07:30.710773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.710802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.710918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.710945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.711096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.711122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.711279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.711305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.711463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.711490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.711586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.711628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.711794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.711823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.711984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.712013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.712166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.712193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.712319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.712348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 
00:34:19.180 [2024-07-25 19:07:30.712452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.712480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.180 [2024-07-25 19:07:30.712610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.180 [2024-07-25 19:07:30.712637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.180 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.712759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.712789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.712950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.712976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.713097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.713125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.713258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.713284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.713437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.713463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.713603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.713632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.713811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.713839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.713966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.713993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 
00:34:19.181 [2024-07-25 19:07:30.714090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.714118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.714222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.714250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.714345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.714372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.714504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.714530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.714694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.714722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.714874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.714901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.714999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.715026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.715188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.715216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.715382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.715409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.715563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.715590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 
00:34:19.181 [2024-07-25 19:07:30.715735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.715764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.715930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.715959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.716112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.181 [2024-07-25 19:07:30.716139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.181 qpair failed and we were unable to recover it. 00:34:19.181 [2024-07-25 19:07:30.716260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.716286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.716388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.716414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.716542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.716568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.716737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.716763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.716892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.716918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.717019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.717046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.717226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.717284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 
00:34:19.182 [2024-07-25 19:07:30.717438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.717466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.717572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.717600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.717722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.717750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.717850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.717876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.718029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.718080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.718234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.718260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.718416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.718443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.718549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.718594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.718735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.718764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.718938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.718970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 
00:34:19.182 [2024-07-25 19:07:30.719114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.719145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.719308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.719337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.719510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.719537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.719644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.719671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.719828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.719859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.719998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.720025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.720163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.720191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.720313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.720356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.720509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.720537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.720711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.720741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 
00:34:19.182 [2024-07-25 19:07:30.720913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.720940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.721086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.721130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.721224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.721250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.721405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.721434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.721608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.721635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.721754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.721798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.721924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.721953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.722100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.722127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.722257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.722283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 00:34:19.182 [2024-07-25 19:07:30.722435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.182 [2024-07-25 19:07:30.722476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.182 qpair failed and we were unable to recover it. 
00:34:19.182 [2024-07-25 19:07:30.722646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.722672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.722804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.722830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.722934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.722962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.723089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.723117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.723237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.723280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.723452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.723479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.723610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.723637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.723787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.723814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.723932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.723975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.724103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.724130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 
00:34:19.183 [2024-07-25 19:07:30.724251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.724277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.724399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.724426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.724551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.724578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.724681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.724708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.724858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.724888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.725028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.725054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.725158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.725186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.725334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.725363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.725535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.725562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.725732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.725766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 
00:34:19.183 [2024-07-25 19:07:30.725888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.725918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.726091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.726119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.726245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.726289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.726436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.726463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.726615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.726642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.726811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.726840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.726955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.726985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.727127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.727154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.727250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.727277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.727392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.727423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 
00:34:19.183 [2024-07-25 19:07:30.727546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.727572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.727699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.727726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.727874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.727903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.728032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.728065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.728197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.728223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.728381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.728408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.728528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.728555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.183 [2024-07-25 19:07:30.728733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.183 [2024-07-25 19:07:30.728762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.183 qpair failed and we were unable to recover it. 00:34:19.184 [2024-07-25 19:07:30.728902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.184 [2024-07-25 19:07:30.728931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.184 qpair failed and we were unable to recover it. 00:34:19.184 [2024-07-25 19:07:30.729044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.184 [2024-07-25 19:07:30.729078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.184 qpair failed and we were unable to recover it. 
00:34:19.184 [2024-07-25 19:07:30.729173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.184 [2024-07-25 19:07:30.729200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:19.184 qpair failed and we were unable to recover it.
00:34:19.184 [2024-07-25 19:07:30.729811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.184 [2024-07-25 19:07:30.729844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:19.184 qpair failed and we were unable to recover it.
00:34:19.184-00:34:19.189 [2024-07-25 19:07:30.729346 through 19:07:30.765034] the same three-line sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously, alternating between tqpair=0x7f5100000b90 and tqpair=0x7f50f0000b90, always with addr=10.0.0.2, port=4420.
00:34:19.189 [2024-07-25 19:07:30.765190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.189 [2024-07-25 19:07:30.765218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.189 qpair failed and we were unable to recover it. 00:34:19.189 [2024-07-25 19:07:30.765341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.189 [2024-07-25 19:07:30.765368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.189 qpair failed and we were unable to recover it. 00:34:19.189 [2024-07-25 19:07:30.765497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.189 [2024-07-25 19:07:30.765524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.189 qpair failed and we were unable to recover it. 00:34:19.189 [2024-07-25 19:07:30.765620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.189 [2024-07-25 19:07:30.765647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.189 qpair failed and we were unable to recover it. 00:34:19.189 [2024-07-25 19:07:30.765787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.189 [2024-07-25 19:07:30.765814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.189 qpair failed and we were unable to recover it. 00:34:19.189 [2024-07-25 19:07:30.765954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.189 [2024-07-25 19:07:30.765994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.189 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.766125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.766154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.766318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.766345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.766473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.766518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.766646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.766678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 
00:34:19.190 [2024-07-25 19:07:30.766832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.766861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.766954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.766981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.767106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.767134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.767260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.767303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.767533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.767584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.767739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.767769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.767935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.767964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.768092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.768119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.768269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.768295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.768456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.768513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 
00:34:19.190 [2024-07-25 19:07:30.768776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.768827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.768965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.768995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.769128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.769158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.769306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.769351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.769527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.769575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.769746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.769792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.769922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.769950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.770164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.770209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.770387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.770431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.770581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.770625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 
00:34:19.190 [2024-07-25 19:07:30.770751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.770778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.770903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.770930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.771021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.771048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.771187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.771218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.771382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.771424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.771552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.771578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.771721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.771761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.771891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.771919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.190 qpair failed and we were unable to recover it. 00:34:19.190 [2024-07-25 19:07:30.772074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.190 [2024-07-25 19:07:30.772118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.772245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.772273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 
00:34:19.191 [2024-07-25 19:07:30.772441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.772472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.772726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.772778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.772890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.772917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.773044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.773080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.773205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.773249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.773392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.773422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.773695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.773750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.773869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.773896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.774048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.774083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.774228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.774277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 
00:34:19.191 [2024-07-25 19:07:30.774450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.774494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.774633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.774677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.774783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.774810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.774940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.774966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.775095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.775123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.775276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.775320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.775469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.775499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.775641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.775671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.775880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.775938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.776101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.776128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 
00:34:19.191 [2024-07-25 19:07:30.776254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.776280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.776412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.776441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.776582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.776611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.776755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.776786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.776963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.776991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.777119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.777147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.777264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.777294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.777463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.777490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.777642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.777669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.777797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.777824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 
00:34:19.191 [2024-07-25 19:07:30.777933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.777960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.778065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.778109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.778253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.778280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.778523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.778576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.778717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.778746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.778856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.191 [2024-07-25 19:07:30.778886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.191 qpair failed and we were unable to recover it. 00:34:19.191 [2024-07-25 19:07:30.779003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.779033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.779130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.779157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.779282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.779308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.779495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.779524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 
00:34:19.192 [2024-07-25 19:07:30.779641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.779670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.779830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.779860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.780004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.780030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.780155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.780182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.780311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.780355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.780515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.780543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.780636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.780665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.780776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.780805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.780948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.780976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.781109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.781136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 
00:34:19.192 [2024-07-25 19:07:30.781310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.781349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.781535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.781566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.781707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.781737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.781875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.781906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.782023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.782051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.782167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.782195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.782349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.782376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.782595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.782625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.782731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.782761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.782865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.782895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 
00:34:19.192 [2024-07-25 19:07:30.783042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.783074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.783200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.783227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.783371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.783400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.783562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.783596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.783732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.783761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.783929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.783960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.784082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.784109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.784213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.784239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.784384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.784415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.784575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.784605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 
00:34:19.192 [2024-07-25 19:07:30.784758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.784785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.784942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.784972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.785142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.785169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.192 [2024-07-25 19:07:30.785290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.192 [2024-07-25 19:07:30.785316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.192 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.785470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.785501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.785695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.785738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.785893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.785922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.786076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.786106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.786223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.786250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.786405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.786431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 
00:34:19.193 [2024-07-25 19:07:30.786585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.786614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.786771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.786800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.786938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.786968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.787150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.787176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.787305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.787331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.787449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.787475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.787605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.787632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.787801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.787830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.787939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.787968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.788132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.788173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 
00:34:19.193 [2024-07-25 19:07:30.788314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.788354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.788542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.788587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.788768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.788812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.788937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.788963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.789067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.789094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.789224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.789251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.789423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.789467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.789608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.789654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.789896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.789925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.790071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.790099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 
00:34:19.193 [2024-07-25 19:07:30.790249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.790276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.790438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.790508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.790767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.790819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.790952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.790981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.791127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.791154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.791257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.791283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.791424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.791453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.791669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.791731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.791868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.791897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.792035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.792070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 
00:34:19.193 [2024-07-25 19:07:30.792213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.792239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.193 qpair failed and we were unable to recover it. 00:34:19.193 [2024-07-25 19:07:30.792376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.193 [2024-07-25 19:07:30.792419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.792559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.792588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.792746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.792775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.792876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.792904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.793072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.793099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.793223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.793249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.793369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.793411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.793525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.793554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.793719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.793748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 
00:34:19.194 [2024-07-25 19:07:30.793862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.793891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.794068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.794111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.794205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.794232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.794390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.794433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.794546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.794587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.794832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.794898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.795055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.795088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.795218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.795244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.795396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.795422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.795599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.795628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 
00:34:19.194 [2024-07-25 19:07:30.795803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.795832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.795939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.795969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.796074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.796115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.796251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.796280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.796392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.796422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.796560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.796590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.796719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.796749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.796875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.796904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.797051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.797093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.797220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.797246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 
00:34:19.194 [2024-07-25 19:07:30.797404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.797459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.797660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.797720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.797860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.797904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.798013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.798043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.798242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.798288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.798411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.798456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.798630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.798687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.798781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.798808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.798930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.194 [2024-07-25 19:07:30.798957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.194 qpair failed and we were unable to recover it. 00:34:19.194 [2024-07-25 19:07:30.799098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.799129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 
00:34:19.195 [2024-07-25 19:07:30.799232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.799261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.799371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.799400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.799512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.799541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.799760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.799816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.799913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.799942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.800084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.800111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.800249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.800290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.800501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.800532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.800739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.800770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.800934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.800964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 
00:34:19.195 [2024-07-25 19:07:30.801092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.801119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.801255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.801298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.801450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.801479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.801593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.801624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.801761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.801791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.801923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.801953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.802086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.802113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.802273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.802299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.802458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.802487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.802634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.802663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 
00:34:19.195 [2024-07-25 19:07:30.802771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.802801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.802965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.802994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.803146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.803173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.803267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.803294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.803425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.803452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.803594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.803623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.803786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.803815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.803949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.803978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.804124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.804151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.804299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.804325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 
00:34:19.195 [2024-07-25 19:07:30.804443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.804472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.195 qpair failed and we were unable to recover it. 00:34:19.195 [2024-07-25 19:07:30.804614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.195 [2024-07-25 19:07:30.804643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.804757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.804785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.804924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.804953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.805128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.805155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.805258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.805298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.805438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.805483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.805623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.805667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.805795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.805839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.805978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.806018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 
00:34:19.196 [2024-07-25 19:07:30.806141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.806172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.806326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.806354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.806507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.806567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.806705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.806735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.806883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.806926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.807097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.807124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.807250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.807276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.807419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.807449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.807614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.807643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.807781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.807810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 
00:34:19.196 [2024-07-25 19:07:30.807918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.807951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.808137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.808165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.808318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.808362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.808501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.808531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.808671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.808701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.808830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.808872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.809010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.809041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.809224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.809250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.809421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.809450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.809592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.809619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 
00:34:19.196 [2024-07-25 19:07:30.809746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.809776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.809880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.809909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.810041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.810110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.810236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.810264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.810391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.810432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.810574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.810603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.810741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.810771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.810934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.810963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.811113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.196 [2024-07-25 19:07:30.811142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.196 qpair failed and we were unable to recover it. 00:34:19.196 [2024-07-25 19:07:30.811294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.811322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 
00:34:19.197 [2024-07-25 19:07:30.811415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.811459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.811595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.811625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.811797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.811827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.811937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.811967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.812133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.812160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.812315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.812346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.812512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.812542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.812709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.812773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.812971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.813028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.813194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.813221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 
00:34:19.197 [2024-07-25 19:07:30.813320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.813366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.813511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.813537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.813652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.813681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.813794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.813823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.813955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.813997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.814132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.814159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.814287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.814313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.814432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.814461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.814621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.814650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.814793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.814825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 
00:34:19.197 [2024-07-25 19:07:30.815013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.815053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.815174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.815203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.815339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.815367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.815613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.815670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.815851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.815895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.816017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.816044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.816169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.816195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.816301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.816328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.816498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.816527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.816634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.816663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 
00:34:19.197 [2024-07-25 19:07:30.816791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.816833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.816995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.817021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.817188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.817215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.817308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.817335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.817455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.817484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.817657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.817686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.817799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.197 [2024-07-25 19:07:30.817828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.197 qpair failed and we were unable to recover it. 00:34:19.197 [2024-07-25 19:07:30.817946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.817972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.818089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.818116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.818244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.818271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 
00:34:19.198 [2024-07-25 19:07:30.818359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.818385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.818488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.818515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.818612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.818638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.818806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.818835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.818937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.818967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.819119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.819147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.819246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.819275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.819439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.819484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.819629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.819661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.819783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.819810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 
00:34:19.198 [2024-07-25 19:07:30.819975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.820003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.820129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.820157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.820317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.820360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.820518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.820548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.820690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.820719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.820816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.820845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.820991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.821018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.821150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.821177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.821291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.821321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.822146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.822178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 
00:34:19.198 [2024-07-25 19:07:30.822359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.822389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.822529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.822558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.822728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.822757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.822879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.822905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.823046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.823080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.823215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.823243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.823396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.823423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.823636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.823686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.823832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.823862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.824010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.824036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 
00:34:19.198 [2024-07-25 19:07:30.824177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.824217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.824378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.824409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.824575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.824606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.824718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.824748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.198 [2024-07-25 19:07:30.824929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.198 [2024-07-25 19:07:30.824990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.198 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.825131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.825160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.825332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.825363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.825569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.825619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.825728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.825773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.825899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.825927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 
00:34:19.199 [2024-07-25 19:07:30.826065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.826093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.826227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.826254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.826402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.826432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.826543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.826572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.826712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.826742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.826881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.826910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.827023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.827049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.827197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.827224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.827322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.827349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.827472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.827501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 
00:34:19.199 [2024-07-25 19:07:30.827607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.827635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.827764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.827793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.827914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.827940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.828069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.828096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.828225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.828252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.828356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.828382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.828503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.828530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.828631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.828658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.828775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.828817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.828965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.828991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 
00:34:19.199 [2024-07-25 19:07:30.829115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.829160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.829332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.829390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.829514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.829559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.829693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.829723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.829841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.829872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.830093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.830133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.830272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.830299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.830446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.830476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.830612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.830642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 00:34:19.199 [2024-07-25 19:07:30.830803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.199 [2024-07-25 19:07:30.830832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.199 qpair failed and we were unable to recover it. 
00:34:19.200 [2024-07-25 19:07:30.830987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.831014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.831149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.831176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.831304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.831331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.831444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.831473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.831586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.831615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.831711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.831740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.831906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.831935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.832052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.832083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.832181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.832208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.832324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.832382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 
00:34:19.200 [2024-07-25 19:07:30.832556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.832587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.832731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.832760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.832894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.832935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.833100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.833126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.833245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.833278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.833383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.833414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.833622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.833686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.833867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.833903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.834036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.834069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.834195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.834222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 
00:34:19.200 [2024-07-25 19:07:30.834340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.834369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.834537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.834590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.834709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.834752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.834913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.834943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.835055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.835090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.835232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.835258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.835380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.835409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.835540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.835569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.835702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.835732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.835843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.835872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 
00:34:19.200 [2024-07-25 19:07:30.836001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.836031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.836186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.836226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.836352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.836386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.836528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.836574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.836724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.200 [2024-07-25 19:07:30.836770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.200 qpair failed and we were unable to recover it. 00:34:19.200 [2024-07-25 19:07:30.836892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.836919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.837046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.837082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.837203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.837249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.837395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.837440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.837579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.837625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 
00:34:19.201 [2024-07-25 19:07:30.837753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.837781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.837885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.837911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.838039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.838072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.838220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.838250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.838363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.838397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.838515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.838544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.838653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.838682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.838788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.838818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.838935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.838964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.839107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.839134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 
00:34:19.201 [2024-07-25 19:07:30.839255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.839281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.839404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.839433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.839565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.839594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.839733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.839761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.839872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.839901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.840031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.840073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.840193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.840220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.840339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.840385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.840553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.840583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.840714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.840743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 
00:34:19.201 [2024-07-25 19:07:30.840852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.840881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.841026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.841068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.841200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.841227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.841341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.841373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.841524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.841553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.841688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.841716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.841843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.841872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.841989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.842015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.842167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.842207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.842339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.842377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 
00:34:19.201 [2024-07-25 19:07:30.842518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.842548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.842725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.842797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.201 qpair failed and we were unable to recover it. 00:34:19.201 [2024-07-25 19:07:30.842965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.201 [2024-07-25 19:07:30.842994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.843142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.843169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.843322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.843375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.843523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.843572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.843790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.843843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.843993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.844023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.844170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.844197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.844292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.844318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 
00:34:19.202 [2024-07-25 19:07:30.844431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.844461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.844671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.844722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.844838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.844867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.844996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.845025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.845160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.845186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.845317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.845344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.845446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.845472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.845595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.845624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.845738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.845764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.845944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.845974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 
00:34:19.202 [2024-07-25 19:07:30.846114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.846141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.846267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.846294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.846422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.846451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.846560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.846589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.846720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.846750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.846889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.846917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.847020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.847073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.847223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.847262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.847461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.847511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.847633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.847666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 
00:34:19.202 [2024-07-25 19:07:30.847815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.847846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.847959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.847986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.848101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.848129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.848227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.848254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.848363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.848392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.848554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.848583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.848683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.848712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.848820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.848849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.848975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.849000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.202 qpair failed and we were unable to recover it. 00:34:19.202 [2024-07-25 19:07:30.849132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.202 [2024-07-25 19:07:30.849158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 
00:34:19.203 [2024-07-25 19:07:30.849288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.849314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.849487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.849516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.849709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.849777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.849910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.849939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.850090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.850133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.850237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.850267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.850371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.850398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.850500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.850527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.850639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.850668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.850882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.850912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 
00:34:19.203 [2024-07-25 19:07:30.851066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.851112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.851266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.851293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.851462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.851488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.851638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.851668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.851800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.851830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.851997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.852026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.852170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.852196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.852317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.852361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.852473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.852500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.852626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.852655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 
00:34:19.203 [2024-07-25 19:07:30.852813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.852842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.852965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.852990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.853131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.853157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.853307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.853332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.853469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.853495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.853617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.853646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.853770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.853814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.853968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.853999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.854112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.854156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.854258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.854285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 
00:34:19.203 [2024-07-25 19:07:30.854437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.854472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.854597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.854626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.854790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.854819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.854928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.854957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.855051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.855087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.855198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.855224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.855368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.203 [2024-07-25 19:07:30.855397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.203 qpair failed and we were unable to recover it. 00:34:19.203 [2024-07-25 19:07:30.855527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.855556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.855658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.855687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.855866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.855923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 
00:34:19.204 [2024-07-25 19:07:30.856079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.856109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.856261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.856307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.856461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.856492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.856692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.856736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.856886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.856912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.857012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.857040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.857165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.857195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.857308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.857341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.857511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.857542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.857654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.857684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 
00:34:19.204 [2024-07-25 19:07:30.857853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.857883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.858064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.858092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.858195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.858221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.858368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.858411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.858579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.858609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.858778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.858822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.858927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.858955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.859088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.859115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.859244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.859270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.859410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.859440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 
00:34:19.204 [2024-07-25 19:07:30.859564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.859593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.859732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.859761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.859885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.859913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.860021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.860069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.860188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.860217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.860360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.860403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.860549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.860599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.860764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.860808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.860905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.860932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 00:34:19.204 [2024-07-25 19:07:30.861053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.204 [2024-07-25 19:07:30.861101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.204 qpair failed and we were unable to recover it. 
00:34:19.205 [2024-07-25 19:07:30.861246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.861275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.861390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.861419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.861582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.861630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.861740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.861770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.861896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.861925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.862028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.862069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.862169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.862195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.862339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.862390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.862607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.862633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.862765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.862791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 
00:34:19.205 [2024-07-25 19:07:30.862919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.862945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.863072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.863201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.863327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.863465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.863596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.863725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.863880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.863977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.864003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.864134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.864161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 
00:34:19.205 [2024-07-25 19:07:30.864260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.864286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.864375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.864401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.864494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.864520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.864644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.864670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.864798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.864825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.864979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.865007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.865151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.865189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.865346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.865375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.865570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.865615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.865823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.865867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 
00:34:19.205 [2024-07-25 19:07:30.865973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.866001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.866163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.866209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.866348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.866401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.866538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.866582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.205 qpair failed and we were unable to recover it. 00:34:19.205 [2024-07-25 19:07:30.866681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.205 [2024-07-25 19:07:30.866708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.866835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.866862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.866991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.867019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.867149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.867180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.867323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.867353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.867505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.867531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 
00:34:19.206 [2024-07-25 19:07:30.867687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.867716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.867831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.867857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.868004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.868030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.868185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.868215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.868323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.868353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.868485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.868514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.868649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.868681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.868832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.868859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.868960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.868986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.869129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.869174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 
00:34:19.206 [2024-07-25 19:07:30.869322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.869351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.869493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.869522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.869709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.869752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.869846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.869878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.869980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.870007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.870131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.870158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.870282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.870309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.870450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.870476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.870593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.870619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.870719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.870746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 
00:34:19.206 [2024-07-25 19:07:30.870868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.870894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.871024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.871052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.871239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.871284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.871435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.871478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.871596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.871626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.871769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.871797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.872002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.872029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.206 [2024-07-25 19:07:30.872183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.206 [2024-07-25 19:07:30.872227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.206 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.872367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.872411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.872556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.872600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 
00:34:19.207 [2024-07-25 19:07:30.872722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.872748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.872902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.872928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.873066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.873093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.873189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.873216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.873340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.873367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.873498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.873525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.873626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.873655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.873754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.873781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.873880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.873907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.874003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.874030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 
00:34:19.207 [2024-07-25 19:07:30.874152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.874183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.874307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.874333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.874463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.874492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.874660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.874699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.874845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.874890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.875057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.875097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.875200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.875226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.875334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.875361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.875495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.875524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.875650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.875679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 
00:34:19.207 [2024-07-25 19:07:30.875793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.875823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.875999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.876027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.876164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.876191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.876398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.876424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.876659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.876709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.876888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.876933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.877027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.877053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.877159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.877186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.877304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.877348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.877507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.877551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 
00:34:19.207 [2024-07-25 19:07:30.877701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.877732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.877874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.877903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.207 [2024-07-25 19:07:30.878063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.207 [2024-07-25 19:07:30.878090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.207 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.878246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.878272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.878412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.878438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.878556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.878599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.878711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.878740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.878876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.878914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.879056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.879109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.879260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.879286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 
00:34:19.208 [2024-07-25 19:07:30.879407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.879433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.879547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.879573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.879695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.879726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.879869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.879896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.880051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.880088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.880294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.880321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.880465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.880508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.880638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.880664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.880755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.880782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.880984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.881011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 
00:34:19.208 [2024-07-25 19:07:30.881151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.881178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.881310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.881336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.881461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.881488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.881589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.881615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.881770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.881799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.881925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.881951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.882082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.882108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.882213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.882240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.882340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.882369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.882465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.882492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 
00:34:19.208 [2024-07-25 19:07:30.882608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.882653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.882792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.882838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.882930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.882957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.883085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.883112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.883215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.883242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.883365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.883392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.883518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.883545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.883676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.208 [2024-07-25 19:07:30.883703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.208 qpair failed and we were unable to recover it. 00:34:19.208 [2024-07-25 19:07:30.883824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.883851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.883986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.884013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 
00:34:19.209 [2024-07-25 19:07:30.884133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.884163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.884281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.884310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.884453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.884479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.884588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.884617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.884738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.884767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.884915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.884942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.885040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.885072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.885191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.885236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.885369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.885412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.885537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.885564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 
00:34:19.209 [2024-07-25 19:07:30.885689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.885733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.885889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.885916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.886049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.886081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.886216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.886242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.886376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.886402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.886492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.886519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.886649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.886675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.886796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.886825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.886940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.886967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.887096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.887126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 
00:34:19.209 [2024-07-25 19:07:30.887239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.887269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.887420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.887450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.887571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.887601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.887763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.887792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.887907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.887936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.888048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.888084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.209 [2024-07-25 19:07:30.888225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.209 [2024-07-25 19:07:30.888251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.209 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.888339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.888365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.888462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.888488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.888608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.888638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 
00:34:19.210 [2024-07-25 19:07:30.888768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.888813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.888923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.888952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.889115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.889142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.889270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.889297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.889411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.889440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.889602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.889631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.889768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.889798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.889904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.889933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.890046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.890080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.890184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.890211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 
00:34:19.210 [2024-07-25 19:07:30.890369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.890395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.890535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.890564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.890674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.890703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.890839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.890869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.891026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.891082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.891188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.891215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.891361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.891407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.891560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.891605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.891744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.891779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.891924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.891962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 
00:34:19.210 [2024-07-25 19:07:30.892106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.892138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.892277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.892307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.892415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.892444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.892579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.892608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.892707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.892736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.892845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.892887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.893038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.893072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.893224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.893253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.893382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.893408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.893560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.893605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 
00:34:19.210 [2024-07-25 19:07:30.893780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.893826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.893950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.893977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.894088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.894135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.894251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.894280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.894424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.894454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.210 [2024-07-25 19:07:30.894562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.210 [2024-07-25 19:07:30.894592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.210 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.894704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.894733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.894849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.894875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.895004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.895032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.895183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.895229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 
00:34:19.211 [2024-07-25 19:07:30.895368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.895412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.895570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.895597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.895691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.895718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.895827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.895853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.895987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.896015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.896142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.896176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.896294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.896323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.896455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.896485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.896612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.896642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.896757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.896786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 
00:34:19.211 [2024-07-25 19:07:30.896928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.896955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.897055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.897088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.897201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.897228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.897347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.897376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.897512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.897542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.897677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.897706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.897814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.897842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.897950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.897980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.898111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.898155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.898317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.898345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 
00:34:19.211 [2024-07-25 19:07:30.898489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.898533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.898679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.898723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.898895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.898941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.899036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.899076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.899225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.899270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.899416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.899461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.899614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.899660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.899788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.899816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.899942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.899969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.900080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.900107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 
00:34:19.211 [2024-07-25 19:07:30.900209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.900236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.900353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.900382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.900498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.900531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.900644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.211 [2024-07-25 19:07:30.900673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.211 qpair failed and we were unable to recover it. 00:34:19.211 [2024-07-25 19:07:30.900838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.900882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.901020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.901047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.901163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.901189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.901341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.901386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.901533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.901577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.901693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.901722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 
00:34:19.212 [2024-07-25 19:07:30.901868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.901896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.902023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.902066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.902183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.902212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.902309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.902338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.902515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.902544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.902676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.902705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.902903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.902948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.903068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.903095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.903235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.903281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 00:34:19.212 [2024-07-25 19:07:30.903429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.903458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it. 
00:34:19.212 [2024-07-25 19:07:30.903632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.903675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it.
00:34:19.212 [2024-07-25 19:07:30.903948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.212 [2024-07-25 19:07:30.903976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.212 qpair failed and we were unable to recover it.
00:34:19.212-00:34:19.217 [2024-07-25 19:07:30.903 - 19:07:30.937] [same error pair repeated continuously: posix.c:1037:posix_sock_create reports connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reporting a sock connection error, alternating between tqpair=0x795570 and tqpair=0x7f50f8000b90, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:34:19.217 [2024-07-25 19:07:30.937780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.937809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.937945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.937974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.938139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.938166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.938286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.938334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.938480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.938524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.938672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.938715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.938827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.938854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.938961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.938987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.939095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.217 [2024-07-25 19:07:30.939123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.217 qpair failed and we were unable to recover it. 00:34:19.217 [2024-07-25 19:07:30.939230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.939256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 
00:34:19.218 [2024-07-25 19:07:30.939366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.939395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.939539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.939566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.939689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.939715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.939850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.939876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.940004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.940030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.940172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.940202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.940347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.940376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.940541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.940570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.940735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.940763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.940881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.940910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 
00:34:19.218 [2024-07-25 19:07:30.941057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.941089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.941195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.941221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.941317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.941361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.941564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.941593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.941707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.941736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.941870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.941898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.942048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.942080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.942183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.942209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.942370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.942397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.942542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.942600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 
00:34:19.218 [2024-07-25 19:07:30.942737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.942785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.942922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.942948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.943043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.943079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.943231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.943261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.943435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.943479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.943659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.943703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.943829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.943858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.944083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.944111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.944283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.944327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.944480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.944524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 
00:34:19.218 [2024-07-25 19:07:30.944707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.944737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.944882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.944908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.945025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.945081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.945229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.945258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.945409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.945437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.945569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.945595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.945712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.945742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.945908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.945935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.946029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.946070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 00:34:19.218 [2024-07-25 19:07:30.946201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.218 [2024-07-25 19:07:30.946230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.218 qpair failed and we were unable to recover it. 
00:34:19.218 [2024-07-25 19:07:30.946341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.946378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.946513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.946542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.946674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.946703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.946868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.946900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.947021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.947048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.947207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.947252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.947382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.947425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.947552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.947579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.947695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.947725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.947855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.947883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 
00:34:19.219 [2024-07-25 19:07:30.948012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.948038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.948190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.948219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.948380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.948410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.948539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.948568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.948687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.948716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.948902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.948953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.949046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.949081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.949207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.949251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.949471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.949514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.949661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.949710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 
00:34:19.219 [2024-07-25 19:07:30.949838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.949866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.949989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.950016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.950167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.950212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.950327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.950382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.950522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.950567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.950708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.950734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.950838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.950864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.951014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.951040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.951200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.951229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.951371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.951415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 
00:34:19.219 [2024-07-25 19:07:30.951589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.951633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.951772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.951798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.951956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.951982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.952082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.952110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.952260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.952304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.952458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.952503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.952611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.952656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.952809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.952835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.952968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.952994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.953098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.953143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 
00:34:19.219 [2024-07-25 19:07:30.953291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.953320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.953470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.953514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.953629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.953659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.953825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.953854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.954028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.954072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.954272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.219 [2024-07-25 19:07:30.954300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.219 qpair failed and we were unable to recover it. 00:34:19.219 [2024-07-25 19:07:30.954413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.954447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.954562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.954591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.954730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.954759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.954887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.954916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 
00:34:19.220 [2024-07-25 19:07:30.955030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.955083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.955230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.955258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.955370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.955409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.955545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.955574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.955689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.955736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.955911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.955938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.956101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.956128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.956273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.956318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.956494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.956541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.956692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.956736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 
00:34:19.220 [2024-07-25 19:07:30.956870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.956898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.957029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.957055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.957177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.957205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.957345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.957374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.957490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.957519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.957640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.957682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.957836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.957862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.957988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.958015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.958110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.958137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.958301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.958330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 
00:34:19.220 [2024-07-25 19:07:30.958438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.958467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.958600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.958629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.958793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.958840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.958968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.958999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.959124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.959152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.959294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.959341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.959514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.959558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.959695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.959724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.959867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.959894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 00:34:19.220 [2024-07-25 19:07:30.960038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.220 [2024-07-25 19:07:30.960071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.220 qpair failed and we were unable to recover it. 
00:34:19.220 [2024-07-25 19:07:30.960196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.220 [2024-07-25 19:07:30.960223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:19.220 qpair failed and we were unable to recover it.
00:34:19.220 [2024-07-25 19:07:30.960837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.220 [2024-07-25 19:07:30.960886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:19.220 qpair failed and we were unable to recover it.
[... the same three-line sequence -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error and "qpair failed and we were unable to recover it." -- repeats continuously from 19:07:30.960 through 19:07:31.000, alternating between tqpair=0x795570 and tqpair=0x7f50f8000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:19.226 [2024-07-25 19:07:31.000405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.226 [2024-07-25 19:07:31.000449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:19.226 qpair failed and we were unable to recover it.
00:34:19.226 [2024-07-25 19:07:31.000608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.000653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.000900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.000930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.001075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.001121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.001256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.001283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.001428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.001459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.001632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.001662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.001828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.001858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.001998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.002024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.002183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.002211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.002365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.002391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 
00:34:19.226 [2024-07-25 19:07:31.002507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.002537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.002670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.002699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.002839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.002875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.003023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.003049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.003181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.003208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.003329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.003355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.003489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.003518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.003621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.003650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.003793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.003822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.003949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.003979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 
00:34:19.226 [2024-07-25 19:07:31.004129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.004156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.004281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.004307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.004449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.004478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.004634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.004663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.226 [2024-07-25 19:07:31.004798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.226 [2024-07-25 19:07:31.004827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.226 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.004964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.004990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.005098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.005139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.005277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.005305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.005485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.005515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.005656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.005685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 
00:34:19.227 [2024-07-25 19:07:31.005814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.005844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.005977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.006005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.006143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.006170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.006300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.006342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.006531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.006590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.006797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.006826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.007000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.007029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.007184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.007210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.007338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.007364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.007487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.007513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 
00:34:19.227 [2024-07-25 19:07:31.007639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.007665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.007798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.007841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.007942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.007985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.008113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.008140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.008272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.008298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.008421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.008448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.008627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.008654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.008786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.008813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.008941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.008967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.009078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.009105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 
00:34:19.227 [2024-07-25 19:07:31.009197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.009223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.009336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.227 [2024-07-25 19:07:31.009365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.227 qpair failed and we were unable to recover it. 00:34:19.227 [2024-07-25 19:07:31.009506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.009532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.009632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.009662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.009770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.009796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.009885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.009911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.010043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.010074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.010173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.010199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.010329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.010355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.010479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.010506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 
00:34:19.228 [2024-07-25 19:07:31.010598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.010624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.010768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.010808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.011008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.011052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.011185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.011214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.011342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.011369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.011510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.011539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.011645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.011676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.011818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.011848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.011973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.012003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.012146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.012173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 
00:34:19.228 [2024-07-25 19:07:31.012303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.012330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.012473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.012518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.012665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.012708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.012857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.012884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.013037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.013069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.013211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.013238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.013380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.013409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.013604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.013632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.013773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.013801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.013946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.013975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 
00:34:19.228 [2024-07-25 19:07:31.014123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.014151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.014299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.014343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.014534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.014563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.014750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.014792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.014924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.014951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.015111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.015157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.015327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.015357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.015491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.228 [2024-07-25 19:07:31.015520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.228 qpair failed and we were unable to recover it. 00:34:19.228 [2024-07-25 19:07:31.015632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.015661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.015801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.015830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 
00:34:19.229 [2024-07-25 19:07:31.015993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.016022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.016173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.016201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.016388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.016432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.016580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.016623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.016823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.016850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.016977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.017004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.017178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.017223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.017367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.017414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.017524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.017552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.017696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.017723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 
00:34:19.229 [2024-07-25 19:07:31.017828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.017855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.017992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.018033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.018206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.018236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.018406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.018436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.018565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.018594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.018730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.018759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.018892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.018919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.019093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.019133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.019239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.019266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.019420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.019449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 
00:34:19.229 [2024-07-25 19:07:31.019589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.019618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.019763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.019793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.019904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.019932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.020140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.020169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.020301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.020345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.020485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.020515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.020656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.020685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.020831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.020861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.021018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.021071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.229 [2024-07-25 19:07:31.021226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.021253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 
00:34:19.229 [2024-07-25 19:07:31.021384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.229 [2024-07-25 19:07:31.021434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.229 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.021610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.021664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.021891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.021945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.022087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.022129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.022264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.022290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.022387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.022414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.022533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.022559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.022708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.022736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.022909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.022938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 00:34:19.230 [2024-07-25 19:07:31.023055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.230 [2024-07-25 19:07:31.023106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.230 qpair failed and we were unable to recover it. 
00:34:19.230 [2024-07-25 19:07:31.023244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.230 [2024-07-25 19:07:31.023270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:19.230 qpair failed and we were unable to recover it.
00:34:19.517 [2024-07-25 19:07:31.024916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.517 [2024-07-25 19:07:31.024976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:19.517 qpair failed and we were unable to recover it.
00:34:19.518 [2024-07-25 19:07:31.031990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.518 [2024-07-25 19:07:31.032030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:19.518 qpair failed and we were unable to recover it.
00:34:19.520 [2024-07-25 19:07:31.045550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.520 [2024-07-25 19:07:31.045594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:19.520 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 19:07:31.023 through 19:07:31.060 (console timestamps 00:34:19.230-00:34:19.522), cycling among tqpair=0x795570, 0x7f50f8000b90, 0x7f50f0000b90, and 0x7f5100000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:19.522 [2024-07-25 19:07:31.060630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.060658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.060786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.060815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.060945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.060974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.061111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.061138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.061238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.061264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.061378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.061407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.061554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.061580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.061734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.061762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.061875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.061903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.062071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.062120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 
00:34:19.523 [2024-07-25 19:07:31.062225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.062251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.062347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.062376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.062545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.062575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.062740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.062769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.062918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.062944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.063038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.063069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.063162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.063188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.063311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.063341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.063487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.063516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.063621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.063650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 
00:34:19.523 [2024-07-25 19:07:31.063758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.063787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.063950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.063978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.064118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.064144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.064295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.064334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.064450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.064495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.064658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.064688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.064855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.064900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.065025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.065052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.065184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.065211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.065340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.065366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 
00:34:19.523 [2024-07-25 19:07:31.065491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.065518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.065647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.065674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.065798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.065825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.065982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.066009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.066164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.066210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.066455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.066511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.066657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.066690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.066847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.523 [2024-07-25 19:07:31.066873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.523 qpair failed and we were unable to recover it. 00:34:19.523 [2024-07-25 19:07:31.066992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.067018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.067166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.067193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 
00:34:19.524 [2024-07-25 19:07:31.067360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.067389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.067590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.067619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.067783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.067812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.067949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.067978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.068120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.068146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.068277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.068304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.068449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.068477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.068598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.068640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.068785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.068813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.068943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.068971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 
00:34:19.524 [2024-07-25 19:07:31.069148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.069189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.069318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.069346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.069482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.069531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.069676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.069719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.069868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.069911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.070071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.070098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.070248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.070293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.070456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.070484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.070657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.070707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.070815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.070843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 
00:34:19.524 [2024-07-25 19:07:31.070996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.071021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.071136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.071178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.071340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.071369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.071508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.071541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.071706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.071734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.071861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.071889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.071990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.072017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.072177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.072220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.072360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.072409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.072548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.072592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 
00:34:19.524 [2024-07-25 19:07:31.072738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.072780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.072912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.072939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.073071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.073115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.073258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.073286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.073453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.073482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.524 [2024-07-25 19:07:31.073645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.524 [2024-07-25 19:07:31.073674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.524 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.073813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.073840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.073984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.074010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.074134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.074161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.074288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.074312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 
00:34:19.525 [2024-07-25 19:07:31.074473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.074502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.074622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.074663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.074781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.074809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.074924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.074950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.075082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.075109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.075262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.075288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.075406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.075435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.075604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.075632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.075771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.075800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.075936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.075964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 
00:34:19.525 [2024-07-25 19:07:31.076103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.076129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.076253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.076279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.076425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.076452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.076563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.076604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.076715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.076743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.076904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.076932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.077072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.077117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.077243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.077268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.077420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.077445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.077549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.077575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 
00:34:19.525 [2024-07-25 19:07:31.077703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.077728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.077875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.077904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.078069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.078129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.078231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.078259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.078453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.078516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.078648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.078693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.078868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.078913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.079017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.079044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.079192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.079219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.079391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.079420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 
00:34:19.525 [2024-07-25 19:07:31.079668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.079697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.079871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.079897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.080017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.080042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.525 qpair failed and we were unable to recover it. 00:34:19.525 [2024-07-25 19:07:31.080187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.525 [2024-07-25 19:07:31.080227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.080355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.080387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.080527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.080556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.080730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.080760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.080888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.080928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.081043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.081077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.081207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.081252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 
00:34:19.526 [2024-07-25 19:07:31.081397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.081441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.081584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.081630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.081862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.081912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.082039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.082076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.082190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.082234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.082353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.082380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.082483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.082511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.082629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.082656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.082808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.082834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 00:34:19.526 [2024-07-25 19:07:31.082961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.082988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it. 
00:34:19.526 [2024-07-25 19:07:31.083146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.083195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it.
00:34:19.526 [2024-07-25 19:07:31.084378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.526 [2024-07-25 19:07:31.084422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.526 qpair failed and we were unable to recover it.
00:34:19.527 [2024-07-25 19:07:31.091174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.527 [2024-07-25 19:07:31.091214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.527 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats continuously from 19:07:31.083 through 19:07:31.119, cycling across tqpairs 0x7f50f8000b90, 0x795570 and 0x7f50f0000b90, always against addr=10.0.0.2, port=4420 ...]
00:34:19.532 [2024-07-25 19:07:31.119624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.532 [2024-07-25 19:07:31.119667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.532 qpair failed and we were unable to recover it. 00:34:19.532 [2024-07-25 19:07:31.119813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.532 [2024-07-25 19:07:31.119859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.532 qpair failed and we were unable to recover it. 00:34:19.532 [2024-07-25 19:07:31.119992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.532 [2024-07-25 19:07:31.120019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.532 qpair failed and we were unable to recover it. 00:34:19.532 [2024-07-25 19:07:31.120144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.532 [2024-07-25 19:07:31.120175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.532 qpair failed and we were unable to recover it. 00:34:19.532 [2024-07-25 19:07:31.120339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.532 [2024-07-25 19:07:31.120368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.532 qpair failed and we were unable to recover it. 00:34:19.532 [2024-07-25 19:07:31.120505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.532 [2024-07-25 19:07:31.120534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.532 qpair failed and we were unable to recover it. 00:34:19.532 [2024-07-25 19:07:31.120665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.532 [2024-07-25 19:07:31.120699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.120892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.120937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.121071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.121100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.121194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.121222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 
00:34:19.533 [2024-07-25 19:07:31.121365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.121395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.121536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.121580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.121746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.121776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.121924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.121951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.122070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.122097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.122223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.122249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.122343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.122386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.122539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.122591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.122733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.122762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.122889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.122918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 
00:34:19.533 [2024-07-25 19:07:31.123079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.123106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.123226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.123252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.123396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.123425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.123552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.123609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.123744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.123773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.123883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.123915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.124071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.124098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.124228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.124255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.124379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.124406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.124558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.124587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 
00:34:19.533 [2024-07-25 19:07:31.124718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.124747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.124892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.124922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.125097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.125123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.125251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.125282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.125424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.125453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.125651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.125680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.125817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.125846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.125981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.126010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.126178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.126205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.126298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.126324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 
00:34:19.533 [2024-07-25 19:07:31.126513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.126539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.126718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.126747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.126914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.126943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.127128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.533 [2024-07-25 19:07:31.127154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.533 qpair failed and we were unable to recover it. 00:34:19.533 [2024-07-25 19:07:31.127308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.127334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.127458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.127503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.127633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.127662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.127844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.127874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.128000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.128028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.128191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.128218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 
00:34:19.534 [2024-07-25 19:07:31.128343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.128369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.128520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.128559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.128793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.128821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.128992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.129020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.129147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.129174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.129270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.129296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.129437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.129463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.129560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.129587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.129735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.129763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.129928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.129957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 
00:34:19.534 [2024-07-25 19:07:31.130097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.130147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.130267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.130294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.130409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.130451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.130601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.130627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.130832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.130861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.130976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.131002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.131163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.131189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.131286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.131312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.131431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.131457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.131582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.131612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 
00:34:19.534 [2024-07-25 19:07:31.131773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.131801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.131956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.131982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.132104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.132130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.132258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.132285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.132394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.132421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.132570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.132598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.132725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.132767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.132936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.132965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.133085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.133126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.133252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.133279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 
00:34:19.534 [2024-07-25 19:07:31.133403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.133429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.133526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.534 [2024-07-25 19:07:31.133568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.534 qpair failed and we were unable to recover it. 00:34:19.534 [2024-07-25 19:07:31.133706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.133734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.133869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.133913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.134072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.134132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.134276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.134304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.134491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.134520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.134775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.134838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.134974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.135002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.135151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.135179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 
00:34:19.535 [2024-07-25 19:07:31.135281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.135309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.135456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.135482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.135580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.135607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.135741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.135767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.135884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.135912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.136043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.136079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.136189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.136215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.136335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.136367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.136465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.136491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.136588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.136615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 
00:34:19.535 [2024-07-25 19:07:31.136769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.136796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.136949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.136978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.137152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.137192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.137333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.137363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.137497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.137541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.137748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.137810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.137929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.137972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.138158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.138186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.138359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.138385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.138480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.138507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 
00:34:19.535 [2024-07-25 19:07:31.138672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.138697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.138835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.138864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.138986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.139012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.139138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.139165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.139268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.139294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.139435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.139462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.139582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.139608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.139771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.139800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.139923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.535 [2024-07-25 19:07:31.139954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.535 qpair failed and we were unable to recover it. 00:34:19.535 [2024-07-25 19:07:31.140131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.140171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 
00:34:19.536 [2024-07-25 19:07:31.140308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.140351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.140510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.140538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.140666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.140693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.140985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.141039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.141164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.141190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.141299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.141324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.141456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.141482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.141633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.141659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.141795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.141823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 00:34:19.536 [2024-07-25 19:07:31.141949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.536 [2024-07-25 19:07:31.141976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.536 qpair failed and we were unable to recover it. 
00:34:19.536 [2024-07-25 19:07:31.142089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.536 [2024-07-25 19:07:31.142116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:19.536 qpair failed and we were unable to recover it.
00:34:19.536 [... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 19:07:31.142089 through 19:07:31.178958 (log prefixes 00:34:19.536-00:34:19.542), cycling over tqpair handles 0x7f5100000b90, 0x7f50f0000b90, 0x7f50f8000b90, and 0x795570, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:19.542 [2024-07-25 19:07:31.179090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.179117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.179227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.179253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.179413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.179441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.179590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.179616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.179735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.179761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.179906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.179931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.180078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.180105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.180201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.180227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.180378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.180404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.180509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.180535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 
00:34:19.542 [2024-07-25 19:07:31.180657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.180683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.180826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.180871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.180976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.181002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.181141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.181185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.181330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.181386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.181518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.181544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.181665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.181709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.181860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.181887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.182047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.182084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.182221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.182266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 
00:34:19.542 [2024-07-25 19:07:31.182404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.182448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.182596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.182639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.182789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.182832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.182970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.182998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.183126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.183152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.183322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.183351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.183461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.183490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.183627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.183655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.183785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.183814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.542 [2024-07-25 19:07:31.183962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.183988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 
00:34:19.542 [2024-07-25 19:07:31.184084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.542 [2024-07-25 19:07:31.184111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.542 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.184238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.184265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.184386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.184435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.184614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.184658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.184802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.184844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.184981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.185007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.185185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.185230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.185377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.185421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.185669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.185713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.185868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.185895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 
00:34:19.543 [2024-07-25 19:07:31.185986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.186013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.186162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.186194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.186334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.186367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.186505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.186534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.186675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.186704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.186844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.186873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.187011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.187042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.187242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.187271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.187432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.187461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.187598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.187641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 
00:34:19.543 [2024-07-25 19:07:31.187763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.187806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.187931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.187957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.188112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.188140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.188248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.188274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.188373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.188399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.188519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.188545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.188682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.188710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.188843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.188869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.188962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.188988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.189141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.189167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 
00:34:19.543 [2024-07-25 19:07:31.189308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.189334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.189468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.189494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.189618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.189645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.189768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.189795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.189903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.189931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.190085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.190111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.190241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.190268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.190427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.190454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.543 qpair failed and we were unable to recover it. 00:34:19.543 [2024-07-25 19:07:31.190612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.543 [2024-07-25 19:07:31.190679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.190827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.190857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 
00:34:19.544 [2024-07-25 19:07:31.191006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.191033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.191163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.191193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.191308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.191337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.191471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.191501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.191684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.191746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.191846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.191875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.192001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.192028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.192166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.192192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.192373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.192402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.192501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.192540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 
00:34:19.544 [2024-07-25 19:07:31.192666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.192709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.192843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.192873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.193029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.193072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.193200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.193226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.193346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.193373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.193496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.193523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.193676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.193706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.193871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.193901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.194079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.194122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.194253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.194280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 
00:34:19.544 [2024-07-25 19:07:31.194410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.194451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.194597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.194638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.194774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.194803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.194969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.194998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.195138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.195165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.195258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.195284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.195414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.195444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.195591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.195620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.195821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.195851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.195983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.196012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 
00:34:19.544 [2024-07-25 19:07:31.196166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.196193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.196311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.196337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.196454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.196498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.544 [2024-07-25 19:07:31.196648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.544 [2024-07-25 19:07:31.196691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.544 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.196862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.196890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.197017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.197045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.197176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.197203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.197308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.197335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.197524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.197553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.197688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.197717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 
00:34:19.545 [2024-07-25 19:07:31.197899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.197957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.198092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.198120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.198284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.198312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.198493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.198537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.198692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.198736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.198863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.198889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.199020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.199048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.199160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.199185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.199306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.199335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.199498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.199526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 
00:34:19.545 [2024-07-25 19:07:31.199689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.199716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.199854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.199882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.200032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.200066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.200220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.200265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.200457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.200484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.200610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.200636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.200763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.200789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.200916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.200944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.201076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.201104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.201213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.201238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 
00:34:19.545 [2024-07-25 19:07:31.201342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.201368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.201460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.201485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.201580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.201606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.201705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.201730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.201834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.201859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.202012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.202038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.545 [2024-07-25 19:07:31.202199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.545 [2024-07-25 19:07:31.202225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.545 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.202329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.202357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.202492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.202536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.202658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.202688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 
00:34:19.546 [2024-07-25 19:07:31.202831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.202857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.203009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.203035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.203191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.203221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.203469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.203524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.203669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.203712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.203840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.203867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.204018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.204046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.204178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.204207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.204344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.204372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.204587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.204652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 
00:34:19.546 [2024-07-25 19:07:31.204814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.204843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.204996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.205023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.205152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.205178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.205291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.205320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.205492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.205521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.205629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.205657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.205822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.205851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.205953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.205982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.206145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.206171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.206347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.206376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 
00:34:19.546 [2024-07-25 19:07:31.206540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.206569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.206707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.206735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.206878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.206907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.207046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.207108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.207239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.207267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.207437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.207482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.207607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.207655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.207802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.207845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.207942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.207970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 00:34:19.546 [2024-07-25 19:07:31.208133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.546 [2024-07-25 19:07:31.208160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.546 qpair failed and we were unable to recover it. 
00:34:19.546 [2024-07-25 19:07:31.208299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.208326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.208451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.208477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.208601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.208627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.208744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.208773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.208873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.208901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.209012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.209041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.209269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.209298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.209410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.209438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.209612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.209642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.209811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.209858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 
00:34:19.547 [2024-07-25 19:07:31.209991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.210018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.210149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.210194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.210368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.210415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.210533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.210581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.210707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.210734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.210891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.210919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.211047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.211079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.211209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.211238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.211376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.211405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.211508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.211536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 
00:34:19.547 [2024-07-25 19:07:31.211673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.211701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.211876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.211921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.212050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.212084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.212235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.212280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.212453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.212500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.212603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.212647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.212797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.212826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.212971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.212999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.213100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.213126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.213258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.213283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 
00:34:19.547 [2024-07-25 19:07:31.213408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.213436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.213584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.213612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.213724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.213753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.213914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.213940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.214069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.214094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.214226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.214252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.214378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.547 [2024-07-25 19:07:31.214405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.547 qpair failed and we were unable to recover it. 00:34:19.547 [2024-07-25 19:07:31.214518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.214547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.214685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.214714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.214903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.214948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 
00:34:19.548 [2024-07-25 19:07:31.215080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.215108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.215282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.215312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.215450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.215494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.215670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.215715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.215865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.215891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.216048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.216081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.216188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.216214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.216387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.216416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.216641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.216700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.216865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.216894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 
00:34:19.548 [2024-07-25 19:07:31.217045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.217104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.217252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.217282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.217418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.217446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.217555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.217583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.217724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.217753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.217918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.217946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.218046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.218082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.218229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.218254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.218381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.218423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.218550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.218579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 
00:34:19.548 [2024-07-25 19:07:31.218768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.218797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.218934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.218962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.219085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.219129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.219261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.219303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.219415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.219443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.219583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.219612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.219752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.219780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.219942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.219970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.220085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.220115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.220298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.220338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 
00:34:19.548 [2024-07-25 19:07:31.220500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.220545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.220718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.220765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.220895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.220921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.548 qpair failed and we were unable to recover it. 00:34:19.548 [2024-07-25 19:07:31.221028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.548 [2024-07-25 19:07:31.221055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.221210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.221254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.221408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.221436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.221571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.221597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.221718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.221744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.221873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.221899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.222026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.222052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 
00:34:19.549 [2024-07-25 19:07:31.222179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.222205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.222325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.222353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.222499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.222528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.222690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.222734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.222863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.222889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.223010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.223036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.223139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.223166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.223287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.223336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.223520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.223549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.223688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.223733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 
00:34:19.549 [2024-07-25 19:07:31.223891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.223918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.224057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.224089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.224213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.224243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.224403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.224447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.224611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.224656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.224820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.224846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.224977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.225004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.225129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.225157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.225307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.225334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.225470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.225497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 
00:34:19.549 [2024-07-25 19:07:31.225651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.225678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.225831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.225867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.225995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.226022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.226182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.226209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.226341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.226368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.226491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.226535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.226666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.226693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.226846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.226873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.227000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.227026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 00:34:19.549 [2024-07-25 19:07:31.227187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.549 [2024-07-25 19:07:31.227231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.549 qpair failed and we were unable to recover it. 
00:34:19.549 [2024-07-25 19:07:31.227394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.227423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.227615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.227660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.227817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.227844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.228006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.228032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.228201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.228229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.228364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.228424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.228567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.228611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.228741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.228767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.228934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.228967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.229125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.229155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 
00:34:19.550 [2024-07-25 19:07:31.229296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.229324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.229557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.229610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.229749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.229778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.229894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.229920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.230085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.230112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.230244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.230271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.230406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.230435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.230575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.230604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.230714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.230742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 00:34:19.550 [2024-07-25 19:07:31.230885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.550 [2024-07-25 19:07:31.230913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.550 qpair failed and we were unable to recover it. 
00:34:19.550 [2024-07-25 19:07:31.231082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.550 [2024-07-25 19:07:31.231123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:19.550 qpair failed and we were unable to recover it.
00:34:19.550 [2024-07-25 19:07:31.232195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.550 [2024-07-25 19:07:31.232226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:19.550 qpair failed and we were unable to recover it.
[... the same three-message failure repeats back-to-back from 19:07:31.231 through 19:07:31.267 (elapsed 00:34:19.550 to 00:34:19.556), alternating between tqpair=0x7f50f8000b90 and tqpair=0x795570, always against addr=10.0.0.2, port=4420 with errno = 111; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:19.556 [2024-07-25 19:07:31.267387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.267414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.267542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.267568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.267712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.267741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.267923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.267949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.268068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.268095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.268284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.268310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.268476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.268502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.268598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.268624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.268762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.268788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.268883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.268908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 
00:34:19.556 [2024-07-25 19:07:31.269041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.269084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.269232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.269261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.269404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.269431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.269561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.269587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.269737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.269763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.269887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.269913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.270014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.270040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.270192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.270221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.270371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.270398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.270568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.270602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 
00:34:19.556 [2024-07-25 19:07:31.270733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.270762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.270903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.270929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.271080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.271125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.271260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.271286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.271380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.271407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.271532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.271558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.271697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.271726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.271883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.271912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.272089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.272133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.272263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.272289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 
00:34:19.556 [2024-07-25 19:07:31.272445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.272472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.272608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.272634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.272788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.272830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.272983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.273009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.273139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.273166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.273325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.273354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.273531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.273557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.273656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.273699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.273875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.556 [2024-07-25 19:07:31.273901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.556 qpair failed and we were unable to recover it. 00:34:19.556 [2024-07-25 19:07:31.274027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.274053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 
00:34:19.557 [2024-07-25 19:07:31.274179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.274224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.274360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.274390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.274512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.274538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.274645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.274671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.274794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.274824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.274975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.275002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.275214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.275242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.275367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.275394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.275488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.275515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.275659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.275685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 
00:34:19.557 [2024-07-25 19:07:31.275807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.275836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.276009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.276035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.276173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.276217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.276361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.276390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.276565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.276590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.276759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.276788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.276932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.276962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.277102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.277128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.277225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.277251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.277401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.277430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 
00:34:19.557 [2024-07-25 19:07:31.277579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.277608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.277707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.277733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.277881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.277910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.278051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.278101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.278212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.278239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.278420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.278446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.278573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.278598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.278748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.278774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.278907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.278950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.279081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.279107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 
00:34:19.557 [2024-07-25 19:07:31.279260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.279286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.279398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.279426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.279554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.279580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.279712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.279738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.279886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.279914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.280040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.280071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.280201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.280227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.280417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.280443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.280535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.280561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.280689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.280714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 
00:34:19.557 [2024-07-25 19:07:31.280840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.280866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.280993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.281019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.557 [2024-07-25 19:07:31.281120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.557 [2024-07-25 19:07:31.281147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.557 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.281313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.281342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.281463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.281490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.281652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.281677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.281852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.281881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.282032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.282066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.282221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.282264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.282399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.282428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 
00:34:19.558 [2024-07-25 19:07:31.282577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.282603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.282706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.282732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.282839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.282865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.283015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.283044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.283193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.283219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.283344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.283387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.283534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.283561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.283713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.283739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.283834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.283860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.283956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.283981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 
00:34:19.558 [2024-07-25 19:07:31.284078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.284104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.284269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.284295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.284445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.284471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.284617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.284646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.284811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.284840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.284968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.284994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.285137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.285164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.285305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.285352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.285507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.285533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.285651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.285694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 
00:34:19.558 [2024-07-25 19:07:31.285844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.285873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.286021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.286047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.286177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.286204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.286352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.286381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.286530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.286557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.286666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.286692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.286844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.286870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.286996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.287023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.287121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.287148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.287302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.287329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 
00:34:19.558 [2024-07-25 19:07:31.287494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.287521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.287649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.287692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.287854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.287883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.288023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.288066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.288184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.288211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.288374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.288400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.288553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.288579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.288729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.288755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.558 qpair failed and we were unable to recover it. 00:34:19.558 [2024-07-25 19:07:31.288867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.558 [2024-07-25 19:07:31.288901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.559 qpair failed and we were unable to recover it. 00:34:19.559 [2024-07-25 19:07:31.289034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.559 [2024-07-25 19:07:31.289082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.559 qpair failed and we were unable to recover it. 
00:34:19.559 - 00:34:19.563 [2024-07-25 19:07:31.289226 through 19:07:31.324798] The same two errors from posix.c:1037:posix_sock_create and nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock repeat back-to-back for this entire interval: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111, first on tqpair=0x795570 and then, from 19:07:31.305280 onward, on tqpair=0x7f50f0000b90, and each attempt ends with "qpair failed and we were unable to recover it." Representative entries:
00:34:19.559 [2024-07-25 19:07:31.289226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.559 [2024-07-25 19:07:31.289252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:19.559 qpair failed and we were unable to recover it.
00:34:19.561 [2024-07-25 19:07:31.305280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.561 [2024-07-25 19:07:31.305323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:19.561 qpair failed and we were unable to recover it.
00:34:19.563 [2024-07-25 19:07:31.324924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.563 [2024-07-25 19:07:31.324950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.563 qpair failed and we were unable to recover it. 00:34:19.563 [2024-07-25 19:07:31.325155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.563 [2024-07-25 19:07:31.325183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.563 qpair failed and we were unable to recover it. 00:34:19.563 [2024-07-25 19:07:31.325290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.325317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.325413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.325440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.325573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.325600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.325760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.325787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.325935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.325963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.326093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.326122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.326236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.326265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.326400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.326425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 
00:34:19.564 [2024-07-25 19:07:31.326525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.326552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.326701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.326729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.326873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.326903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.327055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.327087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.327217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.327243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.327344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.327373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.327510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.327536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.327691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.327717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.327819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.327845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.327998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.328026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 
00:34:19.564 [2024-07-25 19:07:31.328186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.328233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.328358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.328385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.328485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.328511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.328652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.328678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.328772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.328797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.328895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.328922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.329041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.329104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.329243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.329269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.329387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.329430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.329556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.329582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 
00:34:19.564 [2024-07-25 19:07:31.329711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.329757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.329929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.329959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.330095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.330138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.330275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.330301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.330420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.330450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.330600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.330626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.330755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.330796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.330935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.330964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.331075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.331106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 00:34:19.564 [2024-07-25 19:07:31.331262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.564 [2024-07-25 19:07:31.331288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.564 qpair failed and we were unable to recover it. 
00:34:19.565 [2024-07-25 19:07:31.331417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.331446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.331594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.331622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.331796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.331822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.331977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.332004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.332142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.332169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.332290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.332316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.332498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.332525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.332665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.332691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.332825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.332852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.332975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.333001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 
00:34:19.565 [2024-07-25 19:07:31.333104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.333130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.333253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.333279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.333437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.333468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.333593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.333622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.333798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.333828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.333975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.334001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.334145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.334172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.334283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.334310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.334459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.334488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.334640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.334667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 
00:34:19.565 [2024-07-25 19:07:31.334804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.334832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.335004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.335032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.335188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.335215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.335320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.335346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.335474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.335511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.335607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.335634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.335802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.335832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.335998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.336024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.336153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.336179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.336286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.336313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 
00:34:19.565 [2024-07-25 19:07:31.336496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.336528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.336682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.336709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.336840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.336884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.337019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.337075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.337225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.337251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.337421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.337451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.337613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.337642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.565 qpair failed and we were unable to recover it. 00:34:19.565 [2024-07-25 19:07:31.337807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.565 [2024-07-25 19:07:31.337835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.337980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.338006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.338131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.338159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 
00:34:19.566 [2024-07-25 19:07:31.338307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.338337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.338478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.338518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.338701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.338738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.338846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.338872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.339034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.339071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.339237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.339266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.339404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.339430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.339584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.339626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.339750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.339794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.339912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.339940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 
00:34:19.566 [2024-07-25 19:07:31.340074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.340100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.340223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.340250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.340410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.340454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.340621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.340646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.340769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.340796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.340922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.340949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.341144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.341172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.341335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.341376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.341528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.341554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.341707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.341753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 
00:34:19.566 [2024-07-25 19:07:31.341864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.341893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.342072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.342099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.342225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.342252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.342357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.342383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.342537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.342565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.342694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.342724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.342836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.342863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.343011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.343036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.343194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.343232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.343330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.343356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 
00:34:19.566 [2024-07-25 19:07:31.343505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.343554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.343708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.343739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.343892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.343920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.344057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.344096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.566 qpair failed and we were unable to recover it. 00:34:19.566 [2024-07-25 19:07:31.344256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.566 [2024-07-25 19:07:31.344286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.344453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.344499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.344677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.344721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.344821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.344848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.345003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.345030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.345170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.345214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 
00:34:19.567 [2024-07-25 19:07:31.345370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.345414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.345515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.345542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.345687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.345743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.345881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.345908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.346027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.346074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.346228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.346254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.346378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.346405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.346540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.346566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.346663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.346689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.346787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.346813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 
00:34:19.567 [2024-07-25 19:07:31.346942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.346968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.347113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.347145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.347317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.347343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.347483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.347526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.347652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.347680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.347808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.347835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.347989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.348016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.348192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.348237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.348390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.348417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.348567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.348596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 
00:34:19.567 [2024-07-25 19:07:31.348690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.348717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.348874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.348900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.349018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.349044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.349175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.349204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.567 [2024-07-25 19:07:31.349321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.567 [2024-07-25 19:07:31.349353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.567 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.349484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.349512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.349654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.349682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.349822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.349851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.350025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.350065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.350209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.350252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 
00:34:19.568 [2024-07-25 19:07:31.350396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.350424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.350601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.350630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.350739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.350766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.350863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.350891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.351090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.351116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.351217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.351242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.351433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.351505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.351676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.351704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.351830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.351855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.352007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.352035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 
00:34:19.568 [2024-07-25 19:07:31.352166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.352191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.352323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.352348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.352463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.352492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.352631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.352656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.352777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.352809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.352943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.352972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.353089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.353115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.353235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.353261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.353406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.353442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.353592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.353621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 
00:34:19.568 [2024-07-25 19:07:31.353826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.353854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.353980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.354017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.354156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.354183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.354315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.354341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.354487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.354532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.354696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.354724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.354863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.354891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.355041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.355084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.355214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.355240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.355368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.355394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 
00:34:19.568 [2024-07-25 19:07:31.355520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.355548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.568 [2024-07-25 19:07:31.355680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.568 [2024-07-25 19:07:31.355709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.568 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.355881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.355909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.356042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.356077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.356216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.356241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.356343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.356369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.356501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.356530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.356671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.356700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.356838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.356866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.357031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.357085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 
00:34:19.569 [2024-07-25 19:07:31.357200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.357226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.357337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.357366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.357530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.357555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.357689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.357717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.357866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.357891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.358000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.358026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.358186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.358213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.358334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.358367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.358468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.358493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.358587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.358612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 
00:34:19.569 [2024-07-25 19:07:31.358731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.358773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.358898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.358923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.359024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.359064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.359191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.359216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.359311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.359336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.359484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.359523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.359657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.359692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.359786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.359812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.359990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.360019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.569 [2024-07-25 19:07:31.360147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.360172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 
00:34:19.569 [2024-07-25 19:07:31.360264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.569 [2024-07-25 19:07:31.360289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.569 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.360460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.360487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.360629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.360656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.360800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.360826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.360977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.361003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.361108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.361134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.361255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.361281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.361420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.361447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.361562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.361592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.361732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.361758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 
00:34:19.570 [2024-07-25 19:07:31.361883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.361907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.362057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.362095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.362220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.362246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.362376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.362402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.362531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.362556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.362649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.362674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.362767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.362793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.362886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.362911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.363030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.363162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 
00:34:19.570 [2024-07-25 19:07:31.363287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.363438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.363567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.363681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.363834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.363952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.363977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.364086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.364123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.364219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.364245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.364345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.364370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.364528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.364554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 
00:34:19.570 [2024-07-25 19:07:31.364707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.364732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.364837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.364862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.364990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.365016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.365176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.365202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.365299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.365325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.365461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.365486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.570 [2024-07-25 19:07:31.365626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-07-25 19:07:31.365652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.570 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.365808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.365834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.365982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.366010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.366139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.366166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 
00:34:19.856 [2024-07-25 19:07:31.366334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.366359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.366490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.366515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.366633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.366658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.366789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.366814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.366949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.366975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.367110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.367136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.367259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.367284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.367445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.367470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.367566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.367591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.367719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.367748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 
00:34:19.856 [2024-07-25 19:07:31.367870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.367895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.368020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.856 [2024-07-25 19:07:31.368045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.856 qpair failed and we were unable to recover it. 00:34:19.856 [2024-07-25 19:07:31.368150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.368177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.368283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.368309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.368419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.368444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.368566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.368591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.368693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.368718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.368827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.368855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.369022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.369051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.369200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.369226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 
00:34:19.857 [2024-07-25 19:07:31.369323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.369348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.369534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.369562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.369699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.369727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.369901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.369927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.370050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.370082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.370225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.370253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.370414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.370442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.370556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.370584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.370695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.370720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.370853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.370878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 
00:34:19.857 [2024-07-25 19:07:31.371008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.371034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.371147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.371172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.371269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.371294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.371445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.371473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.371590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.371616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.371715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.371742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.371845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.371874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.372018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.372069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.372246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.372278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.372417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.372445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 
00:34:19.857 [2024-07-25 19:07:31.372615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.372640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.372737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.372763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.372890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.372916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.373049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.373107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.373262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.373288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.373412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.373441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.373621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.373650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.373789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.373815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.373926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.857 [2024-07-25 19:07:31.373953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.857 qpair failed and we were unable to recover it. 00:34:19.857 [2024-07-25 19:07:31.374043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.374077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 
00:34:19.858 [2024-07-25 19:07:31.374244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.374269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.374421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.374449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.374622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.374647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.374780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.374805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.374935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.374961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.375114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.375143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.375277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.375306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.375421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.375451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.375613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.375639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.375759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.375785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 
00:34:19.858 [2024-07-25 19:07:31.375914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.375939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.376043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.376081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.376197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.376226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.376368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.376414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.376520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.376546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.376707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.376733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.376829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.376854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.376977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.377003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.377132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.377161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.377331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.377359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 
00:34:19.858 [2024-07-25 19:07:31.377628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.377678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.377795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.377831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.377922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.377948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.378044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.378074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.378214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.378240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.378338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.378364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.378467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.378493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.378626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.378652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.378783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.378808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.378932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.378958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 
00:34:19.858 [2024-07-25 19:07:31.379052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.379084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.379232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.379261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.379407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.379432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.379561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.379586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.379684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.858 [2024-07-25 19:07:31.379709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.858 qpair failed and we were unable to recover it. 00:34:19.858 [2024-07-25 19:07:31.379839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.379872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.379969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.379994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.380093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.380119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.380235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.380261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.380363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.380389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 
00:34:19.859 [2024-07-25 19:07:31.380516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.380545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.380679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.380704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.380799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.380824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.380950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.380975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.381116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.381145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.381279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.381308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.381489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.381520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.381673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.381698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.381827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.381852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.381946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.381972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 
00:34:19.859 [2024-07-25 19:07:31.382124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.382153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.382326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.382354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.382582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.382611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.382775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.382800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.382896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.382922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.383028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.383071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.383172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.383197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.383350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.383375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.383492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.383517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.383648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.383674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 
00:34:19.859 [2024-07-25 19:07:31.383778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.383804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.383959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.383984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.384129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.384157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.384351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.384380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.384617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.384670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.384795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.384821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.384915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.384943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.385044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.385076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.385246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.385274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.385401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.385426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 
00:34:19.859 [2024-07-25 19:07:31.385549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.859 [2024-07-25 19:07:31.385574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.859 qpair failed and we were unable to recover it. 00:34:19.859 [2024-07-25 19:07:31.385698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.385724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.385872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.385897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.386016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.386041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.386184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.386212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.386353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.386381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.386582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.386623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.386757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.386782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.386913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.386938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.387031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.387056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 
00:34:19.860 [2024-07-25 19:07:31.387174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.387202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.387416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.387448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.387592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.387618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.387767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.387792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.387952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.387978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.388079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.388122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.388233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.388258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.388347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.388372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.388539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.388564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.388692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.388718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 
00:34:19.860 [2024-07-25 19:07:31.388826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.388854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.388959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.388985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.389109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.389135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.389228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.389253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.389375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.389399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.389531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.389557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.389697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.389722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.389852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.389878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.389987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.390013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.390189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.390218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 
00:34:19.860 [2024-07-25 19:07:31.390375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.390403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.390547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.390572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.390691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.390717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.390843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.390877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.390983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.391008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.391130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.391173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.391382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.391450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.860 [2024-07-25 19:07:31.391638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.860 [2024-07-25 19:07:31.391693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.860 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.391845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.391874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.392026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.392051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 
00:34:19.861 [2024-07-25 19:07:31.392170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.392199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.392343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.392378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.392546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.392574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.392736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.392761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.392890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.392916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.393011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.393037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.393157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.393185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.393308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.393334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.393468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.393493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.393629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.393654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 
00:34:19.861 [2024-07-25 19:07:31.393741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.393766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.393893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.393918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.394091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.394117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.394215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.394241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.394428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.394456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.394573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.394598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.394692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.394717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.394817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.394843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.394972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.394997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.395165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.395193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 
00:34:19.861 [2024-07-25 19:07:31.395332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.395360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.395579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.395631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.395798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.395824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.395916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.395942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.396102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.396147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.396287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.396315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.396470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.861 [2024-07-25 19:07:31.396495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.861 qpair failed and we were unable to recover it. 00:34:19.861 [2024-07-25 19:07:31.396592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.396618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.396739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.396764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.396898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.396923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 
00:34:19.862 [2024-07-25 19:07:31.397045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.397103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.397239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.397267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.397426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.397451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.397572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.397608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.397709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.397735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.397852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.397877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.397970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.397996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.398112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.398138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.398271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.398296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.398396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.398425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 
00:34:19.862 [2024-07-25 19:07:31.398584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.398609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.398714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.398739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.398889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.398915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.399063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.399105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.399302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.399331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.399529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.399557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.399708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.399733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.399861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.399887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.400068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.400094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.400220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.400247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 
00:34:19.862 [2024-07-25 19:07:31.400525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.400577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.400708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.400733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.400875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.400901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.401030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.401055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.401179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.401207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.401411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.401449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.401554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.401582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.401760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.401785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.401916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.401941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 00:34:19.862 [2024-07-25 19:07:31.402044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.402076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 
00:34:19.862 [2024-07-25 19:07:31.402185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.862 [2024-07-25 19:07:31.402213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.862 qpair failed and we were unable to recover it. 
00:34:19.862-00:34:19.868 (the same connect() failed / sock connection error / qpair failed sequence repeats for every reconnection attempt logged between 19:07:31.402185 and 19:07:31.435616; only the microsecond timestamps change, and the failing qpair alternates between tqpair=0x795570 and tqpair=0x7f50f8000b90, always with addr=10.0.0.2, port=4420) 
00:34:19.868 [2024-07-25 19:07:31.435777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.868 [2024-07-25 19:07:31.435805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.868 qpair failed and we were unable to recover it. 00:34:19.868 [2024-07-25 19:07:31.435952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.868 [2024-07-25 19:07:31.435980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.868 qpair failed and we were unable to recover it. 00:34:19.868 [2024-07-25 19:07:31.436088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.868 [2024-07-25 19:07:31.436130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.868 qpair failed and we were unable to recover it. 00:34:19.868 [2024-07-25 19:07:31.436249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.868 [2024-07-25 19:07:31.436274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.868 qpair failed and we were unable to recover it. 00:34:19.868 [2024-07-25 19:07:31.436394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.868 [2024-07-25 19:07:31.436419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.868 qpair failed and we were unable to recover it. 00:34:19.868 [2024-07-25 19:07:31.436548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.868 [2024-07-25 19:07:31.436591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.868 qpair failed and we were unable to recover it. 00:34:19.868 [2024-07-25 19:07:31.436695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.868 [2024-07-25 19:07:31.436723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.868 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.436855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.436883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.437043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.437073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.437201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.437226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 
00:34:19.869 [2024-07-25 19:07:31.437369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.437397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.437537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.437565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.437691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.437719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.437826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.437858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.437973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.438003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.438128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.438154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.438303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.438329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.438469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.438497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.438675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.438700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.438883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.438911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 
00:34:19.869 [2024-07-25 19:07:31.439078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.439104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.439227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.439253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.439404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.439446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.439538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.439565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.439687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.439716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.439825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.439853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.439984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.440012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.440148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.440174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.440276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.440304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.440448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.440476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 
00:34:19.869 [2024-07-25 19:07:31.440612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.440640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.440773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.440801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.440966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.440994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.441113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.441139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.441266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.441291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.441446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.441474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.441596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.441639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.441775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.441803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.441942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.441970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.442079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.442130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 
00:34:19.869 [2024-07-25 19:07:31.442254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.442279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.442421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.442465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.869 qpair failed and we were unable to recover it. 00:34:19.869 [2024-07-25 19:07:31.442601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.869 [2024-07-25 19:07:31.442642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.442771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.442799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.442984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.443012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.443181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.443207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.443299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.443324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.443465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.443493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.443636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.443661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.443783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.443808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 
00:34:19.870 [2024-07-25 19:07:31.443934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.443962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.444122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.444148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.444269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.444294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.444471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.444499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.444635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.444666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.444815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.444840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.444929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.444954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.445110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.445136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.445291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.445319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.445456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.445484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 
00:34:19.870 [2024-07-25 19:07:31.445605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.445630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.445763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.445789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.445926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.445955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.446116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.446144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.446259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.446284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.446419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.446444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.446593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.446618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.446779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.446804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.446907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.446932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.447026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.447051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 
00:34:19.870 [2024-07-25 19:07:31.447188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.447214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.447307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.447348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.447496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.447521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.447614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.447639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.447733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.447758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.447858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.447901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.448021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.448046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.448220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.448264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.448404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.870 [2024-07-25 19:07:31.448433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.870 qpair failed and we were unable to recover it. 00:34:19.870 [2024-07-25 19:07:31.448540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.448570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 
00:34:19.871 [2024-07-25 19:07:31.448718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.448744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.448870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.448900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.449091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.449120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.449285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.449313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.449468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.449494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.449595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.449621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.449769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.449798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.449936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.449965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.450118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.450154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.450259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.450285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 
00:34:19.871 [2024-07-25 19:07:31.450411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.450437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.450617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.450646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.450827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.450853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.450995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.451024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.451084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a30f0 (9): Bad file descriptor 00:34:19.871 [2024-07-25 19:07:31.451310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.451360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.451478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.451507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.451650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.451677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.451797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.451823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.451945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.451972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 
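The one record above that is not a connect() failure is "Failed to flush tqpair=0x7a30f0 (9): Bad file descriptor" from nvme_tcp_qpair_process_completions. The number in parentheses is the errno value; errno 9 is EBADF on Linux, meaning the flush was attempted on a socket descriptor that was no longer valid, presumably because the socket had already been torn down during the failed reconnect attempts (an inference, not something the log itself states). A minimal sketch of how EBADF surfaces from plain POSIX calls, with illustrative values only:

    /* ebadf_sketch.c -- illustrative only; not part of the test.
     * Shows how errno 9 (EBADF, "Bad file descriptor") arises when I/O
     * is attempted on a socket descriptor that has already been closed. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);  /* a valid descriptor */
        close(fd);                                 /* ...which we tear down */

        /* Any further I/O on the closed descriptor fails with EBADF (9),
         * matching the "(9): Bad file descriptor" flush error in the log. */
        if (send(fd, "x", 1, 0) < 0)
            printf("send: errno=%d (%s)\n", errno, strerror(errno));
        return 0;
    }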
00:34:19.871 [2024-07-25 19:07:31.452104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.452151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.452326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.452364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.452463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.452491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.452649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.452675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.452801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.452828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.452998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.453025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.453159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.453186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.453310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.453353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.453502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.453528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.453651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.453677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 
00:34:19.871 [2024-07-25 19:07:31.453804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.871 [2024-07-25 19:07:31.453835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.871 qpair failed and we were unable to recover it. 00:34:19.871 [2024-07-25 19:07:31.453993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.454019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.454118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.454145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.454294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.454324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.454456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.454482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.454581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.454607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.454738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.454765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.454895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.454921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.455041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.455090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.455238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.455264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 
00:34:19.872 [2024-07-25 19:07:31.455378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.455404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.455557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.455584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.455735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.455764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.455876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.455905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.456012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.456041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.456162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.456188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.456312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.456338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.456507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.456536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.456674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.456706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.456883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.456910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 
00:34:19.872 [2024-07-25 19:07:31.457053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.457090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.457219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.457248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.457389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.457415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.457520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.457547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.457733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.457759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.457914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.457940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.458097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.458127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.458261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.458290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.458452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.458479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 00:34:19.872 [2024-07-25 19:07:31.458601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.872 [2024-07-25 19:07:31.458627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.872 qpair failed and we were unable to recover it. 
00:34:19.878 [2024-07-25 19:07:31.491367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.491393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.491580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.491606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.491733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.491759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.491881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.491907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.492022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.492050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.492230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.492256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.492358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.492384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.492512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.492538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.492663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.492689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.492777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.492810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 
00:34:19.878 [2024-07-25 19:07:31.492929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.492972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.493070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.493097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.493193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.493219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.493318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.493348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.493477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.493504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.493635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.878 [2024-07-25 19:07:31.493661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.878 qpair failed and we were unable to recover it. 00:34:19.878 [2024-07-25 19:07:31.493814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.493856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.494007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.494033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.494151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.494177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.494274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.494300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 
00:34:19.879 [2024-07-25 19:07:31.494435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.494461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.494627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.494656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.494790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.494818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.494934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.494960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.495085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.495112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.495279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.495308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.495480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.495506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.495682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.495711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.495824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.495852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.495990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.496020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 
00:34:19.879 [2024-07-25 19:07:31.496168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.496195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.496327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.496368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.496516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.496543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.496649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.496676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.496794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.496820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.496965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.496991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.497117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.497144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.497297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.497339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.497483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.497509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.497636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.497662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 
00:34:19.879 [2024-07-25 19:07:31.497762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.497788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.497911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.497937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.498037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.498068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.498190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.498216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.498341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.498374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.498507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.498536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.498646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.498674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.498804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.498844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.498989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.499016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.499166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.499194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 
00:34:19.879 [2024-07-25 19:07:31.499346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.499372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.499488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.499532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.499707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.879 [2024-07-25 19:07:31.499751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.879 qpair failed and we were unable to recover it. 00:34:19.879 [2024-07-25 19:07:31.499900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.499926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.500055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.500091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.500224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.500268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.500429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.500473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.500624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.500667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.500838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.500882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.500981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.501007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 
00:34:19.880 [2024-07-25 19:07:31.501170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.501215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.501376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.501420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.501639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.501691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.501796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.501822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.501973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.501999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.502159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.502190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.502352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.502381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.502522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.502551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.502658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.502688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.502804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.502833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 
00:34:19.880 [2024-07-25 19:07:31.502958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.502987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.503138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.503165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.503287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.503314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.503440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.503469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.503586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.503628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.503789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.503818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.503932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.503961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.504125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.504166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.504322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.504371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.504514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.504557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 
00:34:19.880 [2024-07-25 19:07:31.504710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.504752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.504912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.504938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.505039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.505075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.880 qpair failed and we were unable to recover it. 00:34:19.880 [2024-07-25 19:07:31.505251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.880 [2024-07-25 19:07:31.505295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.505468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.505511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.505662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.505708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.505875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.505903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.506032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.506071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.506202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.506228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.506405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.506434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 
00:34:19.881 [2024-07-25 19:07:31.506600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.506629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.506769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.506798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.506949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.506977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.507130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.507176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.507366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.507410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.507616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.507643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.507776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.507803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.507907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.507934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.508117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.508162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.508291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.508317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 
00:34:19.881 [2024-07-25 19:07:31.508484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.508526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.508656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.508682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.508778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.508806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.508961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.508988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.509164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.509195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.509305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.509334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.509453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.509478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.509632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.509660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.509778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.509807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.509942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.509971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 
00:34:19.881 [2024-07-25 19:07:31.510146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.510191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.510344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.510392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.510583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.510613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.510752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.510779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.881 qpair failed and we were unable to recover it. 00:34:19.881 [2024-07-25 19:07:31.510932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.881 [2024-07-25 19:07:31.510958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.511084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.511111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.511255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.511285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.511476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.511522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.511690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.511734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.511891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.511918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 
00:34:19.882 [2024-07-25 19:07:31.512075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.512121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.512231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.512264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.512380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.512408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.512531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.512557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.512721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.512750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.512887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.512913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.513035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.513072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.513228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.513254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.513408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.513437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.513600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.513629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 
00:34:19.882 [2024-07-25 19:07:31.513732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.513760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.513899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.513929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.514064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.514109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.514210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.514236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.514372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.514401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.514569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.514598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.514765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.514793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.514961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.514987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.515109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.515135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.515222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.515249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 
00:34:19.882 [2024-07-25 19:07:31.515395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.515424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.515518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.515547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.515718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.515747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.515912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.515940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.516071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.516115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.516245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.516271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.516406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.516434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.516541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.516569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.882 qpair failed and we were unable to recover it. 00:34:19.882 [2024-07-25 19:07:31.516674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.882 [2024-07-25 19:07:31.516702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.516832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.516878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 
00:34:19.883 [2024-07-25 19:07:31.517021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.517048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.517188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.517214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.517448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.517500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.517637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.517681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.517858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.517902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.518039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.518071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.518198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.518225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.518329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.518367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.518547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.518576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.518709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.518738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 
00:34:19.883 [2024-07-25 19:07:31.518847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.518876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.519032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.519068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.519204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.519231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.519327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.519354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.519528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.519575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.519719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.519761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.519865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.519891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.520023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.520051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.520209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.520236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.520336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.520363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 
00:34:19.883 [2024-07-25 19:07:31.520507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.520537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.520673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.520702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.520841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.520870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.521040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.521072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.521204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.521231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.521412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.521464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.521638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.521684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.521823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.521866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.522028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.522070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.522191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.522220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 
00:34:19.883 [2024-07-25 19:07:31.522368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.522393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.522538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.522580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.522805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.522861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.522966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.522993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.523138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.523168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.883 qpair failed and we were unable to recover it. 00:34:19.883 [2024-07-25 19:07:31.523314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.883 [2024-07-25 19:07:31.523340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.523515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.523558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.523696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.523739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.523891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.523917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.524047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.524080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 
00:34:19.884 [2024-07-25 19:07:31.524222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.524265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.524397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.524440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.524587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.524617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.524759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.524785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.524910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.524936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.525038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.525070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.525189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.525232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.525351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.525378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.525481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.525507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.525663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.525690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 
00:34:19.884 [2024-07-25 19:07:31.525786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.525811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.525907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.525934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.526066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.526096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.526218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.526243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.526338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.526376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.526480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.526506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.526631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.526658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.526784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.526810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.526932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.526958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.527111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.527138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 
00:34:19.884 [2024-07-25 19:07:31.527264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.527290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.527414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.527440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.527568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.527594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.527702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.527729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.527837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.527863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.527984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.528010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.528142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.528186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.528333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.528376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.528502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.528529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 00:34:19.884 [2024-07-25 19:07:31.528687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.528713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.884 qpair failed and we were unable to recover it. 
00:34:19.884 [2024-07-25 19:07:31.528814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.884 [2024-07-25 19:07:31.528840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.528993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.529019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.529165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.529210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.529360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.529403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.529586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.529612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.529767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.529794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.529890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.529916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.530043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.530074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.530250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.530294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.530445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.530487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 
00:34:19.885 [2024-07-25 19:07:31.530620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.530646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.530773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.530799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.530951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.530976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.531085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.531113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.531258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.531303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.531425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.531451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.531556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.531583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.531747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.531773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.531868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.531895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.532027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.532053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 
00:34:19.885 [2024-07-25 19:07:31.532207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.532235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.532411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.532440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.532596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.532645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.532799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.532825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.532927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.532953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.533050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.533082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.533209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.533250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.533372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.533414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.533567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.533593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.533748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.533773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 
00:34:19.885 [2024-07-25 19:07:31.533897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.533924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.534083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.534110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.885 qpair failed and we were unable to recover it. 00:34:19.885 [2024-07-25 19:07:31.534259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.885 [2024-07-25 19:07:31.534302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.534449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.534493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.534616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.534643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.534771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.534797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.534926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.534953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.535078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.535105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.535229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.535254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.535374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.535401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 
00:34:19.886 [2024-07-25 19:07:31.535554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.535580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.535706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.535732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.535859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.535884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.536007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.536032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.536185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.536229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.536374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.536417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.536570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.536595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.536719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.536746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.536878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.536904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.537034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.537064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 
00:34:19.886 [2024-07-25 19:07:31.537185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.537215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.537383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.537409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.537542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.537568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.537696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.537723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.537845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.537871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.537994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.538020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.538154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.538181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.538322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.538377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.538553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.538596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.538698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.538725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 
00:34:19.886 [2024-07-25 19:07:31.538878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.538904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.539030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.539079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.539234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.539282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.539462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.539491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.539685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.539730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.539848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.539873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.539966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.539992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.886 qpair failed and we were unable to recover it. 00:34:19.886 [2024-07-25 19:07:31.540133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.886 [2024-07-25 19:07:31.540176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.540350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.540394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.540536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.540583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 
00:34:19.887 [2024-07-25 19:07:31.540713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.540739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.540845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.540871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.541020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.541046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.541227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.541271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.541424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.541466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.541612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.541641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.541815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.541842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.541971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.541997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.542131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.542161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.542355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.542402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 
00:34:19.887 [2024-07-25 19:07:31.542557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.542600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.542731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.542757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.542903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.542930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.543033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.543064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.543208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.543251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.543427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.543473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.543629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.543655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.543757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.543782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.543909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.543935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.544066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.544093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 
00:34:19.887 [2024-07-25 19:07:31.544197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.544222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.544326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.544352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.544500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.544525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.544648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.544675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.544837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.544863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.545011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.545037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.545194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.545221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.545328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.545356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.545482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.545510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.545655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.545680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 
00:34:19.887 [2024-07-25 19:07:31.545804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.545830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.545983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.887 [2024-07-25 19:07:31.546010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.887 qpair failed and we were unable to recover it. 00:34:19.887 [2024-07-25 19:07:31.546142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.546191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.546309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.546335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.546463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.546489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.546589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.546615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.546722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.546749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.546908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.546933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.547088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.547114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.547247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.547275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 
00:34:19.888 [2024-07-25 19:07:31.547406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.547432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.547560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.547586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.547709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.547736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.547900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.547926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.548026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.548053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.548210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.548254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.548401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.548445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.548582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.548609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.548762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.548788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.548916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.548942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 
00:34:19.888 [2024-07-25 19:07:31.549069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.549095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.549275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.549318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.549492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.549539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.549636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.549662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.549810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.549837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.549994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.550020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.550176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.550221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.550362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.550404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.550553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.550596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.550720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.550746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 
00:34:19.888 [2024-07-25 19:07:31.550872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.550899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.551023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.551050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.551183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.551227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.551404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.551449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.551593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.551622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.551754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.551780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.551904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.551931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.552025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.552051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.552208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.888 [2024-07-25 19:07:31.552256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.888 qpair failed and we were unable to recover it. 00:34:19.888 [2024-07-25 19:07:31.552444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.552471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 
00:34:19.889 [2024-07-25 19:07:31.552597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.552624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.552750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.552777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.552929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.552960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.553110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.553137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.553298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.553324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.553475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.553501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.553655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.553682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.553833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.553860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.553983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.554009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.554192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.554236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 
00:34:19.889 [2024-07-25 19:07:31.554413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.554458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.554612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.554655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.554813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.554839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.554967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.554994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.555103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.555130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.555303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.555347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.555525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.555554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.555694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.555721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.555819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.555846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.555954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.555980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 
00:34:19.889 [2024-07-25 19:07:31.556122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.556152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.556300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.556327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.556433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.556459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.556588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.556614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.556761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.556787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.556903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.556929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.557069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.557109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.557215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.557242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.557370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.557396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.557533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.557560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 
00:34:19.889 [2024-07-25 19:07:31.557709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.557736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.557889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.557916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.558037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.558069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.558220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.558249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.558439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.558484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.889 [2024-07-25 19:07:31.558667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.889 [2024-07-25 19:07:31.558694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.889 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.558818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.558844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.558991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.559018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.559183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.559212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.559341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.559367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 
00:34:19.890 [2024-07-25 19:07:31.559468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.559494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.559639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.559665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.559769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.559800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.559938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.559964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.560100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.560127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.560250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.560276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.560440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.560469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.560638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.560666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.560805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.560834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.560958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.560984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 
00:34:19.890 [2024-07-25 19:07:31.561138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.561164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.561289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.561317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.561426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.561455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.561592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.561620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.561780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.561809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.561940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.561969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.562128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.562169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.562291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.562335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.562521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.562552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.562752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.562796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 
00:34:19.890 [2024-07-25 19:07:31.562917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.562943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.563047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.563092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.563248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.563274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.563403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.563429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.563555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.890 [2024-07-25 19:07:31.563581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.890 qpair failed and we were unable to recover it. 00:34:19.890 [2024-07-25 19:07:31.563708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.563735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.563840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.563866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.564027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.564053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.564215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.564244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.564344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.564377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 
00:34:19.891 [2024-07-25 19:07:31.564490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.564519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.564682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.564731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.564823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.564849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.564951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.564977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.565150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.565195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.565349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.565375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.565527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.565553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.565684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.565711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.565838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.565866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.566029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.566065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 
00:34:19.891 [2024-07-25 19:07:31.566239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.566284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.566407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.566452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.566623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.566673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.566785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.566813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.566948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.566974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.567078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.567121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.567302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.567331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.567497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.567526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.567638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.567666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.567885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.567914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 
00:34:19.891 [2024-07-25 19:07:31.568076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.568121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.568223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.568249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.568401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.568430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.568528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.568556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.568698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.568727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.568874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.568920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.891 [2024-07-25 19:07:31.569075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.891 [2024-07-25 19:07:31.569106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.891 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.569270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.569297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.569436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.569480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.569617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.569647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 
00:34:19.892 [2024-07-25 19:07:31.569758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.569784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.569938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.569965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.570098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.570125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.570273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.570299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.570454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.570484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.570624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.570653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.570816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.570844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.571008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.571033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.571134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.571161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 00:34:19.892 [2024-07-25 19:07:31.571282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.892 [2024-07-25 19:07:31.571308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.892 qpair failed and we were unable to recover it. 
00:34:19.892 [2024-07-25 19:07:31.571453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.892 [2024-07-25 19:07:31.571482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:19.892 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for tqpair=0x795570 through 19:07:31.575655 ...]
00:34:19.893 [2024-07-25 19:07:31.575799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.893 [2024-07-25 19:07:31.575831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:19.893 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f50f8000b90 through 19:07:31.599323 ...]
00:34:19.897 [2024-07-25 19:07:31.599492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.897 [2024-07-25 19:07:31.599535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:19.897 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f5100000b90 through 19:07:31.601011, then again for tqpair=0x7f50f8000b90 ...]
00:34:19.898 [2024-07-25 19:07:31.606864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.898 [2024-07-25 19:07:31.606890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:19.898 qpair failed and we were unable to recover it.
00:34:19.898 [2024-07-25 19:07:31.606991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.607017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.607199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.607228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.607420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.607470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.607615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.607658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.607790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.607818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.607946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.607973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.608116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.608147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.608291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.608318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.608433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.608464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.608608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.608634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 
00:34:19.898 [2024-07-25 19:07:31.608754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.608781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.608929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.608956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.609049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.609083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.609257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.609287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.609436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.609481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.609612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.609639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.609773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.609801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.609925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.609956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.610087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.610115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.610255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.610300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 
00:34:19.898 [2024-07-25 19:07:31.610449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.610493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.610618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.610645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.610795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.610822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.610932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.898 [2024-07-25 19:07:31.610959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.898 qpair failed and we were unable to recover it. 00:34:19.898 [2024-07-25 19:07:31.611130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.611175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.611296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.611324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.611445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.611472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.611599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.611627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.611754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.611781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.611901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.611929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 
00:34:19.899 [2024-07-25 19:07:31.612072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.612130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.612260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.612291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.612434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.612463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.612655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.612723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.612862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.612893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.613035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.613076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.613216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.613247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.613442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.613486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.613631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.613676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.613804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.613831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 
00:34:19.899 [2024-07-25 19:07:31.613970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.613997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.614151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.614195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.614314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.614358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.614498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.614543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.614686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.614713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.614867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.614903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.614993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.615020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.615160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.615205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.615367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.615398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.615509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.615538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 
00:34:19.899 [2024-07-25 19:07:31.615652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.615681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.615859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.615885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.616043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.616077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.616203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.616229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.616359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.616388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.616491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.616520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.616626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.616655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.616820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.616865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.616999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.617026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.617144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.617175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 
00:34:19.899 [2024-07-25 19:07:31.617395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.617458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.617612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.617656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.617771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.899 [2024-07-25 19:07:31.617816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.899 qpair failed and we were unable to recover it. 00:34:19.899 [2024-07-25 19:07:31.617945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.617971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.618119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.618165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.618311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.618341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.618505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.618536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.618675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.618704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.618870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.618899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.619021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.619048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 
00:34:19.900 [2024-07-25 19:07:31.619203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.619229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.619385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.619415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.619573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.619602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.619738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.619767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.619906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.619935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.620076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.620103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.620234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.620260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.620412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.620442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.620557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.620599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.620723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.620753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 
00:34:19.900 [2024-07-25 19:07:31.620895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.620925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.621036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.621068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.621196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.621222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.621390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.621420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.621540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.621584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.621735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.621764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.621902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.621931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.622034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.622074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.622249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.622275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.622419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.622447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 
00:34:19.900 [2024-07-25 19:07:31.622646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.622675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.900 [2024-07-25 19:07:31.622837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.900 [2024-07-25 19:07:31.622867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.900 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.623002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.623030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.623168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.623196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.623315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.623341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.623485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.623514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.623677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.623707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.623869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.623898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.624030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.624071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.624220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.624246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 
00:34:19.901 [2024-07-25 19:07:31.624345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.624371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.624513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.624542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.624709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.624737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.624900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.624929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.625064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.625093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.625263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.625289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.625441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.625469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.625599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.625627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.625785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.625814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.625934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.625960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 
00:34:19.901 [2024-07-25 19:07:31.626064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.626091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.626247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.626273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.626377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.626428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.626551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.626580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.626704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.626746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.626856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.626885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.627035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.627077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.627205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.627231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.627331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.627364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.627535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.627564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 
00:34:19.901 [2024-07-25 19:07:31.627701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.627727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.627831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.627856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.627997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.628026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.628177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.628204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.628373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.628402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.628530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.901 [2024-07-25 19:07:31.628563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.901 qpair failed and we were unable to recover it. 00:34:19.901 [2024-07-25 19:07:31.628716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-07-25 19:07:31.628743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-07-25 19:07:31.628913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-07-25 19:07:31.628942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-07-25 19:07:31.629057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-07-25 19:07:31.629106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 00:34:19.902 [2024-07-25 19:07:31.629227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.902 [2024-07-25 19:07:31.629253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.902 qpair failed and we were unable to recover it. 
00:34:19.902 [2024-07-25 19:07:31.629378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.902 [2024-07-25 19:07:31.629404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:19.902 qpair failed and we were unable to recover it.
00:34:19.902 - 00:34:19.908 [2024-07-25 19:07:31.629584 - 19:07:31.664773] (the same three-message sequence repeats for every further connection attempt in this window: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570, 0x7f5100000b90 or 0x7f50f8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:34:19.908 [2024-07-25 19:07:31.664920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.908 [2024-07-25 19:07:31.664947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:19.908 qpair failed and we were unable to recover it.
00:34:19.908 [2024-07-25 19:07:31.665083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.665109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.665254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.665296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.665449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.665492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.665636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.665680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.665807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.665832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.665957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.665983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.666151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.666197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.666307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.666349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.666474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.666500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.666631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.666656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 
00:34:19.908 [2024-07-25 19:07:31.666786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.666811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.666914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.666939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.667098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.667124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.667262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.667301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.667403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.667430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.667564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.667590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.667687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.667714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.667812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.667838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.667967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.667993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.668163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.668192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 
00:34:19.908 [2024-07-25 19:07:31.668291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.668320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.668470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.668499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.668657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.668702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.668875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.668921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.669049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.669086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.669267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.669313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.669424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.908 [2024-07-25 19:07:31.669451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.908 qpair failed and we were unable to recover it. 00:34:19.908 [2024-07-25 19:07:31.669619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.669663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.669806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.669851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.670003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.670028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 
00:34:19.909 [2024-07-25 19:07:31.670185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.670215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.670408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.670455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.670589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.670632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.670749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.670792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.670891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.670918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.671029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.671076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.671239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.671268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.671414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.671442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.671575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.671603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.671715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.671745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 
00:34:19.909 [2024-07-25 19:07:31.671915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.671944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.672116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.672144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.672291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.672335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.672490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.672534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.672708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.672751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.672904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.672930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.673025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.673051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.673237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.673280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.673425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.673472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.673613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.673661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 
00:34:19.909 [2024-07-25 19:07:31.673757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.673782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.673909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.673934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.674055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.674118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.674264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.674300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.674410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.674439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.674605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.674633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.674740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.674768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.674947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.674975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.675137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.675181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 00:34:19.909 [2024-07-25 19:07:31.675370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.909 [2024-07-25 19:07:31.675413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.909 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 19:07:31.675594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.675640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.675812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.675855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.676005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.676030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.676209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.676251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.676368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.676412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.676551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.676594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.676745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.676788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.676917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.676942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.677107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.677137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.677298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.677351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 19:07:31.677536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.677562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.677689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.677715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.677841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.677866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.677966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.677992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.678125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.678152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.678285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.678310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.678465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.678490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.678615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.678640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.678764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.678789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.678939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.678964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 
00:34:19.910 [2024-07-25 19:07:31.679086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.679125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.679287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.679314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.679438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.679464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.679561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.679587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.679685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.679710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.679849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.679887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.679998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.680025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.680143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.680169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.680286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.680314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.910 qpair failed and we were unable to recover it. 00:34:19.910 [2024-07-25 19:07:31.680451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.910 [2024-07-25 19:07:31.680479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 19:07:31.680590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.680619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.680782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.680810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.680920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.680946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.681050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.681086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.681249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.681274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.681430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.681458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.681605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.681633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.681799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.681827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.681937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.681962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.682090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.682116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 19:07:31.682241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.682266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.682369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.682394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.682515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.682541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.682705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.682733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.682857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.682898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.683029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.683057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.683207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.683232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.683358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.683388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.683508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.683550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.683693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.683721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 19:07:31.683835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.683860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.684008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.684036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.684194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.684220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.684348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.684374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.684499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.684542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.684720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.684746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.684903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.684928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.685053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.685088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.685214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.685239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.685365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.685390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 
00:34:19.911 [2024-07-25 19:07:31.685550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.685576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.685697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.685722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.685875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.911 [2024-07-25 19:07:31.685932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.911 qpair failed and we were unable to recover it. 00:34:19.911 [2024-07-25 19:07:31.686110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 19:07:31.686138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 19:07:31.686267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 19:07:31.686292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 19:07:31.686422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 19:07:31.686447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 19:07:31.686595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 19:07:31.686638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 19:07:31.686788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 19:07:31.686831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 19:07:31.686931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 19:07:31.686956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 00:34:19.912 [2024-07-25 19:07:31.687086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.912 [2024-07-25 19:07:31.687113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:19.912 qpair failed and we were unable to recover it. 
00:34:19.912 [2024-07-25 19:07:31.687259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.912 [2024-07-25 19:07:31.687303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:19.912 qpair failed and we were unable to recover it.
00:34:19.912 [2024-07-25 19:07:31.687940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.912 [2024-07-25 19:07:31.687968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:19.912 qpair failed and we were unable to recover it.
The same three-line failure repeats continuously from 19:07:31.687 through 19:07:31.713 (console timestamps 00:34:19.912 through 00:34:20.207), with the reported queue pair switching back and forth between tqpair=0x7f50f8000b90 and tqpair=0x795570. Every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with the qpair reported as failed and unrecoverable. The run closes with:
00:34:20.207 [2024-07-25 19:07:31.713858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.207 [2024-07-25 19:07:31.713883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:20.207 qpair failed and we were unable to recover it.
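On Linux, errno = 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2:4420 (the standard NVMe/TCP port) is being actively refused, which typically means nothing is listening on that address and port at the moment of the attempt. The sketch below is not SPDK code; it is a minimal, hedged reproduction of the same symptom with a plain POSIX socket, using only the address and port taken from the log.

```c
/* Minimal sketch (not SPDK code): reproduce the "errno = 111" seen above.
 * With no listener on 10.0.0.2:4420, connect() fails with ECONNREFUSED. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no target listening this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```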
00:34:20.207 [2024-07-25 19:07:31.714011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.207 [2024-07-25 19:07:31.714037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:20.207 qpair failed and we were unable to recover it.
00:34:20.207 [2024-07-25 19:07:31.714237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.207 [2024-07-25 19:07:31.714280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:20.207 qpair failed and we were unable to recover it.
From 19:07:31.714 onward the same failure repeats for a third queue-pair context, tqpair=0x7f50f0000b90, still against addr=10.0.0.2, port=4420 with errno = 111, through the last record at 19:07:31.722 (console timestamp 00:34:20.209):
00:34:20.209 [2024-07-25 19:07:31.722754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.209 [2024-07-25 19:07:31.722780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:20.209 qpair failed and we were unable to recover it.
00:34:20.209 [2024-07-25 19:07:31.722904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.722930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.723055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.723088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.723239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.723265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.723365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.723391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.723497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.723523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.723648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.723675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.723768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.723794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.723950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.723976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.724082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.724126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.724267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.724293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 
00:34:20.209 [2024-07-25 19:07:31.724424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.724449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.724545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.724571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.724695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.724720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.724854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.724880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.725001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.725027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.725169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.725196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.725321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.725347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.725470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.725496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.725647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.725672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.725826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.725852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 
00:34:20.209 [2024-07-25 19:07:31.725994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.726023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.726160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.726198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.726339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.726372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.726474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.726500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.209 [2024-07-25 19:07:31.726650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.209 [2024-07-25 19:07:31.726676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.209 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.726774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.726799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.726927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.726952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.727098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.727124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.727280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.727305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.727433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.727459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 
00:34:20.210 [2024-07-25 19:07:31.727562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.727589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.727720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.727746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.727908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.727934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.728063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.728089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.728217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.728243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.728367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.728392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.728528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.728555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.728684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.728710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.728820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.728849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.729012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.729037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 
00:34:20.210 [2024-07-25 19:07:31.729202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.729228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.729325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.729351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.729470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.729496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.729602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.729629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.729753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.729779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.729944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.729988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.730179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.730212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.730321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.730348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.730474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.730499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.730732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.730788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 
00:34:20.210 [2024-07-25 19:07:31.730929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.730955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.731081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.731108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.731243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.731270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.731381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.731408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.731534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.731560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.731687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.731713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.731865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.210 [2024-07-25 19:07:31.731891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.210 qpair failed and we were unable to recover it. 00:34:20.210 [2024-07-25 19:07:31.732037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.732067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.732196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.732226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.732406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.732432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 
00:34:20.211 [2024-07-25 19:07:31.732580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.732606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.732730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.732756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.732849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.732879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.733001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.733028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.733190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.733220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.733425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.733454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.733691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.733743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.733919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.733944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.734072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.734117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.734300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.734330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 
00:34:20.211 [2024-07-25 19:07:31.734525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.734554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.734722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.734748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.734878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.734906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.735011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.735038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.735223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.735251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.735418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.735446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.735660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.735708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.735853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.735878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.735979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.736006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.736182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.736211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 
00:34:20.211 [2024-07-25 19:07:31.736435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.736464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.736598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.736628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.736754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.736781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.736911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.736937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.737068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.737113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.211 [2024-07-25 19:07:31.737291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.211 [2024-07-25 19:07:31.737319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.211 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.737449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.737477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.737645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.737671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.737797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.737822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.737930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.737956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 
00:34:20.212 [2024-07-25 19:07:31.738123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.738152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.738356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.738384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.738525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.738550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.738668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.738693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.738838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.738863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.738991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.739016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.739182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.739212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.739377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.739405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.739548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.739573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.739727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.739752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 
00:34:20.212 [2024-07-25 19:07:31.739849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.739874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.740010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.740035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.740185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.740221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.740366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.740396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.740593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.740622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.740765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.740791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.740882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.740908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.741034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.741065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.741194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.741223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.741361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.741390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 
00:34:20.212 [2024-07-25 19:07:31.741587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.741615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.741753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.741778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.741902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.741927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.742069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.742095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.742220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.742245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.742367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.742392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.742500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.742526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.742617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.212 [2024-07-25 19:07:31.742642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.212 qpair failed and we were unable to recover it. 00:34:20.212 [2024-07-25 19:07:31.742795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.742821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.742964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.742992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 
00:34:20.213 [2024-07-25 19:07:31.743141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.743167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.743268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.743294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.743387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.743412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.743515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.743541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.743700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.743725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.743850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.743876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.744001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.744027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.744160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.744187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.744281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.744306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.744475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.744514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 
00:34:20.213 [2024-07-25 19:07:31.744651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.744679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.744769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.744795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.744947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.744973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.745105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.745132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.745230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.745256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.745357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.745384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.745515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.745541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.745634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.745660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.745785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.745812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.745946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.745972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 
00:34:20.213 [2024-07-25 19:07:31.746120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.746159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.746294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.746321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.746418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.746447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.746610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.746635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.746765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.746791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.746967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.746996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.747143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.747170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.747300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.747326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.747419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.747445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.747564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.747590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 
00:34:20.213 [2024-07-25 19:07:31.747711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.747737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.747863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.747889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.748039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.748071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.213 qpair failed and we were unable to recover it. 00:34:20.213 [2024-07-25 19:07:31.748173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.213 [2024-07-25 19:07:31.748198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.748357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.748383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.748508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.748535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.748667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.748693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.748848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.748874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.748995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.749025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.749216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.749255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 
00:34:20.214 [2024-07-25 19:07:31.749362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.749389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.749519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.749545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.749666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.749691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.749844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.749869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.749965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.749991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.750095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.750124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.750282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.750308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.750460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.750487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.750589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.750615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.750737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.750767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 
00:34:20.214 [2024-07-25 19:07:31.750903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.750929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.751036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.751069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.751222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.751248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.751382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.751407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.751501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.751526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.751623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.751648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.751777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.751802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.751954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.751980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.752104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.752129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.752257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.752283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 
00:34:20.214 [2024-07-25 19:07:31.752407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.752432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.752534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.752559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.752682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.752708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.752817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.752845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.752999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.753025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.753157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.753184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.753314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.753340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.753472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.753498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.753622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.214 [2024-07-25 19:07:31.753648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.214 qpair failed and we were unable to recover it. 00:34:20.214 [2024-07-25 19:07:31.753776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.753802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 
00:34:20.215 [2024-07-25 19:07:31.753958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.753983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.754112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.754139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.754242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.754268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.754393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.754419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.754546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.754571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.754695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.754721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.754829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.754856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.754966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.754994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.755114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.755141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.755234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.755259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 
00:34:20.215 [2024-07-25 19:07:31.755360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.755387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.755509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.755535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.755693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.755718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.755882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.755921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.756036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.756083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.756245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.756272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.756401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.756427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.756532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.756558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.756679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.756704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.756803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.756830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 
00:34:20.215 [2024-07-25 19:07:31.756931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.756957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.757124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.757151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.757271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.757297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.757423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.757448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.757583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.757608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.757764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.757790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.757922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.757963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.758082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.758108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.758237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.758262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.758419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.758444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 
00:34:20.215 [2024-07-25 19:07:31.758566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.758591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.758737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.758764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.758853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.758879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.759037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.759075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.215 qpair failed and we were unable to recover it. 00:34:20.215 [2024-07-25 19:07:31.759207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.215 [2024-07-25 19:07:31.759234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.759360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.759386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.759490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.759516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.759647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.759672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.759765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.759790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.759922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.759948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 
00:34:20.216 [2024-07-25 19:07:31.760066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.760094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.760244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.760282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.760503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.760530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.760685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.760710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.760842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.760868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.761022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.761047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.761182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.761213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.761349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.761375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.761504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.761531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.761629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.761656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 
00:34:20.216 [2024-07-25 19:07:31.761813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.761838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.761953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.761982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.762150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.762176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.762304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.762330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.762465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.762491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.762613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.762639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.762735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.762761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.762852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.762877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.762994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.763020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.763165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.763205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 
00:34:20.216 [2024-07-25 19:07:31.763311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.763338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.763491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.763516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.763614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.763639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.763771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.216 [2024-07-25 19:07:31.763796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.216 qpair failed and we were unable to recover it. 00:34:20.216 [2024-07-25 19:07:31.763921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.763946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.764093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.764134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.764238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.764263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.764393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.764421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.764545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.764570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.764672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.764697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 
00:34:20.217 [2024-07-25 19:07:31.764790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.764815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.764940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.764966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.765085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.765111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.765206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.765235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.765324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.765350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.765475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.765501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.765626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.765652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.765748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.765773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.765895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.765920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.766014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.766040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 
00:34:20.217 [2024-07-25 19:07:31.766163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.766189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.766328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.766353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.766457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.766482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.766574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.766599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.766752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.766776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.766905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.766930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.767026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.767052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.767153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.767179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.767277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.767303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.767424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.767449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 
00:34:20.217 [2024-07-25 19:07:31.767551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.767576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.767707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.767732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.767862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.767890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.768056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.768089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.768230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.768255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.768354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.768379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.768472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.768497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.768591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.768616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.768751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.768776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.768876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.768901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 
00:34:20.217 [2024-07-25 19:07:31.768995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.217 [2024-07-25 19:07:31.769028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.217 qpair failed and we were unable to recover it. 00:34:20.217 [2024-07-25 19:07:31.769163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.769189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.769288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.769315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.769417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.769443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.769603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.769628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.769762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.769787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.769887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.769914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.770049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.770089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.770199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.770225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 00:34:20.218 [2024-07-25 19:07:31.770319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.218 [2024-07-25 19:07:31.770345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.218 qpair failed and we were unable to recover it. 
00:34:20.218 [2024-07-25 19:07:31.770480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.218 [2024-07-25 19:07:31.770506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:20.218 qpair failed and we were unable to recover it.
00:34:20.218 [2024-07-25 19:07:31.773177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.218 [2024-07-25 19:07:31.773216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:20.218 qpair failed and we were unable to recover it.
00:34:20.219 [2024-07-25 19:07:31.775315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.219 [2024-07-25 19:07:31.775348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:20.219 qpair failed and we were unable to recover it.
00:34:20.219 [... the same three-line record repeats continuously from 19:07:31.770 through 19:07:31.805, cycling between tqpair=0x795570, tqpair=0x7f5100000b90, and tqpair=0x7f50f8000b90; every attempt against addr=10.0.0.2, port=4420 fails with errno = 111 and "qpair failed and we were unable to recover it." ...]
00:34:20.224 [2024-07-25 19:07:31.806149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.806175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.806274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.806300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.806449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.806474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.806583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.806608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.806724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.806752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.806890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.806918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.807041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.807073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.807174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.807201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.807334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.807362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.807495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.807528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 
00:34:20.224 [2024-07-25 19:07:31.807635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.807663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.807809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.807866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.807966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.807992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.808095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.808122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.808214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.808240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.808394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.808420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.808576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.808601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.808741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.808787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.808890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.808915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.809016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.809053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 
00:34:20.224 [2024-07-25 19:07:31.809179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.809208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.224 qpair failed and we were unable to recover it. 00:34:20.224 [2024-07-25 19:07:31.809403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.224 [2024-07-25 19:07:31.809446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.809591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.809636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.809770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.809795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.809892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.809917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.810044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.810082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.810204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.810247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.810395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.810439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.810589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.810614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.810714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.810739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 
00:34:20.225 [2024-07-25 19:07:31.810900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.810926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.811075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.811111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.811234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.811260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.811371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.811401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.811545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.811570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.811722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.811747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.811875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.811914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.812017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.812042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.812160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.812189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.812388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.812416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 
00:34:20.225 [2024-07-25 19:07:31.812551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.812575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.812672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.812698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.812826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.812852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.812982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.813007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.813187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.813232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.813387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.813431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.813548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.813591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.813717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.813742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.813876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.813902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.813999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.814024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 
00:34:20.225 [2024-07-25 19:07:31.814207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.814250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.814384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.814414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.814551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.814580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.814727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.814752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.814875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.814901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.815023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.815048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.815210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.815238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.225 [2024-07-25 19:07:31.815374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.225 [2024-07-25 19:07:31.815402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.225 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.815506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.815534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.815637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.815665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 
00:34:20.226 [2024-07-25 19:07:31.815820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.815846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.815977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.816002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.816094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.816120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.816270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.816309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.816467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.816498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.816669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.816698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.816846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.816872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.817040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.817071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.817174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.817199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.817298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.817323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 
00:34:20.226 [2024-07-25 19:07:31.817478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.817503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.817639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.817664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.817755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.817780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.817899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.817925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.818056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.818118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.818312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.818340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.818454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.818479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.818635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.818660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.818810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.818835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.818964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.818989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 
00:34:20.226 [2024-07-25 19:07:31.819188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.819217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.819386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.819411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.819514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.819539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.819668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.819693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.819831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.819871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.819976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.820003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.820159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.820189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.820329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.820359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.820498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.820528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.226 qpair failed and we were unable to recover it. 00:34:20.226 [2024-07-25 19:07:31.820674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.226 [2024-07-25 19:07:31.820700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 
00:34:20.227 [2024-07-25 19:07:31.820854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.820884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.820995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.821020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.821165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.821191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.821334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.821362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.821495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.821523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.821650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.821676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.821782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.821808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.821897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.821923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.822017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.822042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.822204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.822229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 
00:34:20.227 [2024-07-25 19:07:31.822325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.822350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.822506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.822531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.822629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.822654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.822750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.822775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.822919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.822944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.823073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.823099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.823223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.823248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.823354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.823379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.823474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.823499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.823596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.823621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 
00:34:20.227 [2024-07-25 19:07:31.823745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.823770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.823870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.823895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.824016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.824041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.824164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.824205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.824356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.824381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.824533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.824561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.824718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.824743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.824870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.824899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.824989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.825014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.825163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.825191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 
00:34:20.227 [2024-07-25 19:07:31.825358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.825383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.825480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.825505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.825626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.825667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.825817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.825842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.825977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.826003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.227 [2024-07-25 19:07:31.826159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.227 [2024-07-25 19:07:31.826187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.227 qpair failed and we were unable to recover it. 00:34:20.228 [2024-07-25 19:07:31.826362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.228 [2024-07-25 19:07:31.826405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.228 qpair failed and we were unable to recover it. 00:34:20.228 [2024-07-25 19:07:31.826531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.228 [2024-07-25 19:07:31.826557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.228 qpair failed and we were unable to recover it. 00:34:20.228 [2024-07-25 19:07:31.826663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.228 [2024-07-25 19:07:31.826688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.228 qpair failed and we were unable to recover it. 00:34:20.228 [2024-07-25 19:07:31.826841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.228 [2024-07-25 19:07:31.826867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.228 qpair failed and we were unable to recover it. 
00:34:20.228 [2024-07-25 19:07:31.826983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.228 [2024-07-25 19:07:31.827009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:20.228 qpair failed and we were unable to recover it.
00:34:20.228 [... the same connect()/qpair-failure triplet repeats continuously through 19:07:31.860625 (log offsets 00:34:20.228-00:34:20.234), alternating between tqpair=0x7f50f0000b90 and tqpair=0x795570, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:20.234 [2024-07-25 19:07:31.860761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.860786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.860886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.860912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.861043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.861076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.861179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.861205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.861306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.861332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.861436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.861462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.861584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.861610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.861742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.861772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.861903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.861930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.862068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.862094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 
00:34:20.234 [2024-07-25 19:07:31.862225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.862251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.862378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.862404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.862530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.862555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.862686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.862712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.862848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.862894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.863089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.863119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.863250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.863277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.863433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.863458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.863579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.863605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.863761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.863787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 
00:34:20.234 [2024-07-25 19:07:31.863938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.863964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.864075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.864101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.864234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.864259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.864361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.864386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.864512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.234 [2024-07-25 19:07:31.864538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.234 qpair failed and we were unable to recover it. 00:34:20.234 [2024-07-25 19:07:31.864681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.864706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.864808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.864834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.864931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.864957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.865055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.865088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.865190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.865216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 
00:34:20.235 [2024-07-25 19:07:31.865341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.865367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.865529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.865555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.865710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.865736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.865862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.865888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.866013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.866042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.866184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.866209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.866334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.866360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.866463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.866488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.866625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.866650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.866766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.866792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 
00:34:20.235 [2024-07-25 19:07:31.866900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.866925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.867012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.867037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.867203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.867242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.867355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.867382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.867513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.867539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.867647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.867673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.867775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.867801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.867974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.868003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.868156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.868182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.868287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.868312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 
00:34:20.235 [2024-07-25 19:07:31.868434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.868460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.868552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.868577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.868701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.868727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.868847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.868872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.869008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.869033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.869164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.869189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.869312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.869338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.869467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.869492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.869617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.869643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.869794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.869819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 
00:34:20.235 [2024-07-25 19:07:31.869985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.870013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.235 qpair failed and we were unable to recover it. 00:34:20.235 [2024-07-25 19:07:31.870148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.235 [2024-07-25 19:07:31.870178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.870278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.870304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.870423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.870449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.870579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.870604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.870739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.870764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.870887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.870915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.871153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.871180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.871307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.871332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.871456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.871481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 
00:34:20.236 [2024-07-25 19:07:31.871611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.871636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.871735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.871761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.871885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.871912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.872031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.872056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.872183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.872209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.872367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.872405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.872514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.872540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.872672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.872698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.872856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.872882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.873034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.873071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 
00:34:20.236 [2024-07-25 19:07:31.873217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.873242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.873362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.873387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.873478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.873504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.873630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.873656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.873779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.873805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.873912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.873938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.874068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.874111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.874198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.874224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.874327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.874352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.236 [2024-07-25 19:07:31.874516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.874542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 
00:34:20.236 [2024-07-25 19:07:31.874666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.236 [2024-07-25 19:07:31.874692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.236 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.874793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.874819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.874939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.874964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.875082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.875108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.875232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.875257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.875404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.875430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.875564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.875589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.875691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.875716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.875892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.875931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.876055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.876100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 
00:34:20.237 [2024-07-25 19:07:31.876238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.876265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.876391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.876416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.876551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.876579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.876736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.876762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.876872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.876901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.877042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.877076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.877219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.877245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.877381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.877407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.877538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.877564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.877665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.877691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 
00:34:20.237 [2024-07-25 19:07:31.877790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.877819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.877964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.878004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.878157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.878186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.878338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.878364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.878501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.878527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.878659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.878686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.878788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.878815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.878952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.878991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.879156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.879185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.879342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.879368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 
00:34:20.237 [2024-07-25 19:07:31.879506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.879532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.879656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.879682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.879803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.879829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.880000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.880038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.880186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.880220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.880382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.880409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.237 qpair failed and we were unable to recover it. 00:34:20.237 [2024-07-25 19:07:31.880544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.237 [2024-07-25 19:07:31.880570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 19:07:31.880686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 19:07:31.880712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 19:07:31.880813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 19:07:31.880838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 00:34:20.238 [2024-07-25 19:07:31.880964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.238 [2024-07-25 19:07:31.880989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.238 qpair failed and we were unable to recover it. 
00:34:20.238 [2024-07-25 19:07:31.881113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.238 [2024-07-25 19:07:31.881139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420
00:34:20.238 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back for tqpair=0x7f50f8000b90 ...]
00:34:20.238 [2024-07-25 19:07:31.883047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.238 [2024-07-25 19:07:31.883093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:20.238 qpair failed and we were unable to recover it.
[... six consecutive attempts on tqpair=0x7f5100000b90 fail the same way before the retries return to tqpair=0x7f50f8000b90 ...]
[... the pattern continues unchanged through 19:07:31.915: over 200 connect() attempts to 10.0.0.2 port 4420 fail with errno = 111 and each qpair is reported as failed and unrecoverable ...]
00:34:20.244 [2024-07-25 19:07:31.915431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.915478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.915630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.915655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.915750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.915780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.915937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.915963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.916089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.916116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.916266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.916310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.916492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.916538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.916661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.916688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.916839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.916865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.916957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.916983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 19:07:31.917145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.917188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.917346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.917372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.917523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.917548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.917646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.917671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.917827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.917853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.918005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.918162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.918286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.918438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.918563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 
00:34:20.244 [2024-07-25 19:07:31.918689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.918821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.918969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.918995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.919170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.919200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.919339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.919368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.919508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.919536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.919678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.919706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.919866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.919915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.920040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.920087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.244 qpair failed and we were unable to recover it. 00:34:20.244 [2024-07-25 19:07:31.920248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.244 [2024-07-25 19:07:31.920292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 19:07:31.920465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.920508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.920603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.920629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.920766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.920793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.920920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.920946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.921046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.921081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.921194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.921224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.921404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.921431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.921607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.921635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.921793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.921818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.921916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.921941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 19:07:31.922069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.922095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.922259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.922284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.922397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.922425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.922593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.922622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.922757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.922799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.922921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.922946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.923072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.923098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.923200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.923226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.923398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.923426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.923593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.923621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 19:07:31.923734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.923763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.923935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.923981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.924091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.924117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.924284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.924310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.924456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.924485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.924674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.924717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.924851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.924877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.924997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.925024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.925159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.925189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.925362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.925388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 
00:34:20.245 [2024-07-25 19:07:31.925526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.925570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.925694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.925721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.925851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.925878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.926006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.926033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.926207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.926236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.926390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.245 [2024-07-25 19:07:31.926419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.245 qpair failed and we were unable to recover it. 00:34:20.245 [2024-07-25 19:07:31.926584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.926612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.926749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.926777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.926891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.926919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.927070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.927096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 
00:34:20.246 [2024-07-25 19:07:31.927230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.927255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.927393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.927419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.927568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.927596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.927703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.927732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.927906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.927935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.928056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.928089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.928208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.928233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.928383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.928411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.928547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.928575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.928717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.928745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 
00:34:20.246 [2024-07-25 19:07:31.928884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.928913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.929077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.929117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.929249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.929276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.929426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.929477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.929628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.929670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.929835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.929861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.929968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.929995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.930124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.930150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.930323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.930365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.930536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.930579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 
00:34:20.246 [2024-07-25 19:07:31.930704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.930730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.930857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.930883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.931011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.931036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.931190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.246 [2024-07-25 19:07:31.931220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.246 qpair failed and we were unable to recover it. 00:34:20.246 [2024-07-25 19:07:31.931336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.931364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.931535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.931563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.931727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.931755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.931900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.931928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.932080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.932123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.932270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.932299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 
00:34:20.247 [2024-07-25 19:07:31.932472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.932500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.932618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.932646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.932771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.932799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.932947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.932972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.933093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.933119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.933264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.933293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.933406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.933435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.933596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.933624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.933722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.933751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.933888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.933917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 
00:34:20.247 [2024-07-25 19:07:31.934068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.934115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.934264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.934296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.934437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.934481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.934633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.934676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.934864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.934892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.935037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.935069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.935172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.935197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.935341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.935383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.935528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.935570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.935679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.935723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 
00:34:20.247 [2024-07-25 19:07:31.935849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.935875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.936028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.936054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.936209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.936237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.936373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.936417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.936572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.936615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.936772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.936798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.936929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.936956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.937067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.937110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.937257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.937285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.937430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.937458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 
00:34:20.247 [2024-07-25 19:07:31.937590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.247 [2024-07-25 19:07:31.937618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.247 qpair failed and we were unable to recover it. 00:34:20.247 [2024-07-25 19:07:31.937761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.937789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.937902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.937927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.938048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.938086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.938190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.938216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.938337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.938365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.938526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.938555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.938686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.938715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.938862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.938888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.939001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.939027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 
00:34:20.248 [2024-07-25 19:07:31.939129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.939155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.939254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.939282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.939454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.939481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.939588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.939616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.939792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.939820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.939990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.940017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.940175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.940201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.940338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.940367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.940531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.940559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.940699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.940727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 
00:34:20.248 [2024-07-25 19:07:31.940895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.940923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.941075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.941119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.941246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.941272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.941401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.941426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.941601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.941629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.941776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.941818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.941971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.941996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.942142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.942168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.942302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.942328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.942501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.942529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 
00:34:20.248 [2024-07-25 19:07:31.942686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.942714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.942859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.942887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.943029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.943054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.943192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.943217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.943365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.943397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.943530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.943558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.943679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.943704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.943857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.943885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.944030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.944056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.248 qpair failed and we were unable to recover it. 00:34:20.248 [2024-07-25 19:07:31.944215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.248 [2024-07-25 19:07:31.944240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 19:07:31.944360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.944385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.944526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.944554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.944697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.944725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.944868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.944897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.945085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.945124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.945259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.945286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.945405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.945436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.945631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.945673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.945777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.945803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.945933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.945958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 19:07:31.946085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.946112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.946215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.946241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.946368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.946393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.946527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.946553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.946729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.946772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.946884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.946910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.947034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.947066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.947221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.947249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.947390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.947419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.947614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.947643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 19:07:31.947807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.947835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.947938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.947971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.948124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.948152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.948267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.948310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.948437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.948480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.948620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.948663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.948803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.948847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.948939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.948964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.949117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.949144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.949270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.949296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 
00:34:20.249 [2024-07-25 19:07:31.949394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.949420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.949575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.949601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.949726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.949752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.949885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.949911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.950009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.950036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.950191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.950217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.950369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.950397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.249 [2024-07-25 19:07:31.950527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.249 [2024-07-25 19:07:31.950556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.249 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.950697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.950725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.950869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.950897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 
00:34:20.250 [2024-07-25 19:07:31.951020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.951047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.951159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.951185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.951338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.951364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.951473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.951503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.951644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.951670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.951797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.951823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.951983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.952009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.952155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.952184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.952371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.952409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.952606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.952636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 
00:34:20.250 [2024-07-25 19:07:31.952766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.952796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.952942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.952969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.953121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.953151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.953289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.953318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.953501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.953529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.953679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.953706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.953832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.953857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.953992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.954019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.954202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.954231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.954388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.954416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 
00:34:20.250 [2024-07-25 19:07:31.954539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.954582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.954737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.954762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.954890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.954916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.955053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.955115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.955271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.955298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.955394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.955419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.955530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.955558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.955698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.955723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.955849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.955874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.955996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.956021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 
00:34:20.250 [2024-07-25 19:07:31.956138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.956168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.956300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.956328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.250 qpair failed and we were unable to recover it. 00:34:20.250 [2024-07-25 19:07:31.956467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.250 [2024-07-25 19:07:31.956496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.956641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.956667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.956799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.956824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.956964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.957004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.957174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.957205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.957439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.957469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.957641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.957668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.957775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.957802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 
00:34:20.251 [2024-07-25 19:07:31.957935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.957961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.958096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.958127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.958261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.958289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.958450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.958478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.958592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.958617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.958773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.958798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.958927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.958952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.959077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.959105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.959240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.959266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.959397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.959422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 
00:34:20.251 [2024-07-25 19:07:31.959578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.959603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.959734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.959760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.959887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.959913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.960069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.960112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.960326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.960354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.960470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.960496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.960601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.960626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.960721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.960747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.960874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.960899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.960998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 
00:34:20.251 [2024-07-25 19:07:31.961137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.961269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.961394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.961513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.961688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.961801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.961955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.961980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.962135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.962164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.962342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.251 [2024-07-25 19:07:31.962370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.251 qpair failed and we were unable to recover it. 00:34:20.251 [2024-07-25 19:07:31.962488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.962513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.252 [2024-07-25 19:07:31.962629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.962654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.962807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.962832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.962964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.962989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.963101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.963129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.963267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.963292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.963432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.963461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.963580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.963605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.963703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.963728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.963819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.963845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.964000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.964025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.252 [2024-07-25 19:07:31.964152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.964182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.964355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.964380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.964480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.964505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.964656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.964681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.964839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.964864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.964963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.964989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.965106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.965136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.965351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.965376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.965530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.965555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.965641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.965670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.252 [2024-07-25 19:07:31.965823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.965849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.966000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.966026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.966177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.966205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.966342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.966371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.966558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.966586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.966753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.966778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.966934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.966960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.967107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.967136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.967330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.967358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.967534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.967590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.252 [2024-07-25 19:07:31.967709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.967734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.967886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.967912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.968010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.968035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.968181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.968210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.968345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.968373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.968501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.968530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.968703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.968728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.968829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.968854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.968986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.969011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.969110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.969136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 
00:34:20.252 [2024-07-25 19:07:31.969263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.969289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.252 [2024-07-25 19:07:31.969441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.252 [2024-07-25 19:07:31.969469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.252 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.969604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.969632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.969787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.969816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.970003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.970031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.970180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.970209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.970390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.970419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.970602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.970630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.970764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.970792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.970901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.970929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 
00:34:20.253 [2024-07-25 19:07:31.971075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.971101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.971230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.971256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.971356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.971381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.971478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.971503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.971602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.971628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.971777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.971803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.971897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.971922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.972023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.972048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.972183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.972209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.972306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.972332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 
00:34:20.253 [2024-07-25 19:07:31.972453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.972479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.972576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.972601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.972702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.972727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.972849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.972875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.972992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.973020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.973238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.973288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.973428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.973472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.973621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.973664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.973769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.973795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.973883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.973909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 
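From this point the same failure is reported alternately for two qpair objects (0x795570 and 0x7f50f8000b90) while the initiator keeps retrying the connection and finally reports "qpair failed and we were unable to recover it." The sketch below is a generic, hedged illustration of a bounded retry-with-backoff loop around a connection attempt; it is not SPDK's actual reconnect path, and try_connect() is a hypothetical stand-in for the qpair connect that keeps failing in the log.

/* Generic retry-with-backoff sketch (assumption: illustrative only,
 * not the SPDK nvme_tcp reconnect logic). */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool try_connect(void)
{
    /* Hypothetical helper: a real implementation would attempt the
     * socket connect and return true on success. Here it always fails,
     * mirroring the behaviour seen in the log. */
    return false;
}

int main(void)
{
    unsigned int delay_us = 1000;          /* start at 1 ms */
    const int max_attempts = 8;

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect()) {
            printf("connected on attempt %d\n", attempt);
            return 0;
        }
        fprintf(stderr, "attempt %d failed, retrying in %u us\n",
                attempt, delay_us);
        usleep(delay_us);
        delay_us *= 2;                      /* exponential backoff */
    }

    fprintf(stderr, "unable to recover the connection\n");
    return 1;
}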
00:34:20.253 [2024-07-25 19:07:31.974039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.974072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.974192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.974236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.974381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.974423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.974551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.974577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.974678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.974705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.974805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.974830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.974926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.974951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.975053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.975086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.975187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.975212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.975315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.975340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 
00:34:20.253 [2024-07-25 19:07:31.975473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.975499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.975606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.975631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.975757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.975782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.253 qpair failed and we were unable to recover it. 00:34:20.253 [2024-07-25 19:07:31.975907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.253 [2024-07-25 19:07:31.975934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.976065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.976092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.976220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.976264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.976412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.976456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.976616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.976642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.976775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.976801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.976903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.976930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 
00:34:20.254 [2024-07-25 19:07:31.977076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.977102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.977230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.977256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.977366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.977406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.977552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.977580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.977723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.977752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.977901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.977927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.978028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.978053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.978191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.978217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.978337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.978365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.978467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.978495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 
00:34:20.254 [2024-07-25 19:07:31.978637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.978665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.978801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.978829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.978955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.978983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.979136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.979162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.979300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.979325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.979493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.979521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.979687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.979715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.979844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.979872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.980032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.980067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.980220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.980245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 
00:34:20.254 [2024-07-25 19:07:31.980400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.980425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.980521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.980564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.980696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.980724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.980893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.980921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.981071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.981117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.981241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.981267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.981391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.981417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.981544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.981587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.981747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.981775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.981939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.981967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 
00:34:20.254 [2024-07-25 19:07:31.982105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.982148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.982251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.982277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.982426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.982452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.982587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.982615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.982732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.982760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.982881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.982924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.983025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.254 [2024-07-25 19:07:31.983053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.254 qpair failed and we were unable to recover it. 00:34:20.254 [2024-07-25 19:07:31.983177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.983203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.983359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.983385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.983531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.983559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 
00:34:20.255 [2024-07-25 19:07:31.983698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.983727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.983868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.983895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.984066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.984109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.984214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.984239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.984360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.984385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.984516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.984559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.984736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.984764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.984932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.984960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.985098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.985126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.985245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.985271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 
00:34:20.255 [2024-07-25 19:07:31.985450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.985478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.985624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.985656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.985791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.985820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.985971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.986000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.986174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.986200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.986328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.986369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.986512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.986538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.986640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.986666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.986799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.986825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.986918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.986942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 
00:34:20.255 [2024-07-25 19:07:31.987069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.987095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.987193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.987218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.987344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.987369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.987539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.987566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.987736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.987764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.987915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.987940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.988067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.988093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.988275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.255 [2024-07-25 19:07:31.988303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.255 qpair failed and we were unable to recover it. 00:34:20.255 [2024-07-25 19:07:31.988452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.988477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.988576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.988601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 
00:34:20.256 [2024-07-25 19:07:31.988749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.988777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.988941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.988966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.989105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.989134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.989267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.989295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.989391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.989432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.989562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.989588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.989730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.989755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.989875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.989901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.990064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.990090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.990292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.990318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 
00:34:20.256 [2024-07-25 19:07:31.990411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.990437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.990564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.990590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.990769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.990795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.990960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.990989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.991096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.991137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.991287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.991313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.991444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.991470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.991572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.991598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.991774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.991802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.991947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.991973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 
00:34:20.256 [2024-07-25 19:07:31.992125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.992166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.992307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.992335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.992484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.992514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.992613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.992639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.992767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.992795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.992938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.992964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.993073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.993099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.993218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.993244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.993336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.993361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 00:34:20.256 [2024-07-25 19:07:31.993486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.256 [2024-07-25 19:07:31.993511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.256 qpair failed and we were unable to recover it. 
00:34:20.256 [2024-07-25 19:07:31.993639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.993667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.993823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.993849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.993946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.993971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.994095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.994121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.994249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.994274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.994369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.994395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.994527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.994555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.994735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.994760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.994886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.994928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.995094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.995123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 
00:34:20.257 [2024-07-25 19:07:31.995246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.995271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.995364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.995389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.995565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.995593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.995777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.995803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.995921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.995964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.996075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.996117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.996245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.996270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.996359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.996384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.996540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.996568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.996709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.996739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 
00:34:20.257 [2024-07-25 19:07:31.996862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.996889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.996998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.997026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.997147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.997173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.997327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.997352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.997540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.997568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.997676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.997702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.997787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.997812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.997932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.997960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.998077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.998102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.998230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.998256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 
00:34:20.257 [2024-07-25 19:07:31.998404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.998432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.998548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.998574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.998676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.998701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.998893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.998918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.999015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.999041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.999173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.257 [2024-07-25 19:07:31.999199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.257 qpair failed and we were unable to recover it. 00:34:20.257 [2024-07-25 19:07:31.999321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:31.999346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:31.999475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:31.999500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:31.999594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:31.999619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:31.999736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:31.999765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 19:07:31.999935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:31.999960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.000064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.000090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.000245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.000271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.000423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.000448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.000598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.000626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.000793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.000821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.000956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.000981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.001111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.001138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.001321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.001349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.001499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.001524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 19:07:32.001619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.001645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.001774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.001800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.001924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.001949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.002077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.002103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.002259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.002288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.002396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.002421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.002519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.002544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.002696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.002724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.002858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.002887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.002990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.003018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 19:07:32.003154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.003184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.003277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.003303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.003402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.003429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.003590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.003616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.003741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.003767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.003869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.003894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.004021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.004046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.004171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.004197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.004324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.004365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.004536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.004562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 
00:34:20.258 [2024-07-25 19:07:32.004681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.004707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.004800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.004825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.004986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.258 [2024-07-25 19:07:32.005011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.258 qpair failed and we were unable to recover it. 00:34:20.258 [2024-07-25 19:07:32.005166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.005192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.005335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.005363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.005516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.005542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.005636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.005661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.005811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.005836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.005980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.006008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.006131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.006158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 19:07:32.006320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.006360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.006516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.006541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.006663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.006688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.006785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.006810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.006939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.006965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.007069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.007095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.007216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.007242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.007345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.007374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.007478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.007504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.007603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.007628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 19:07:32.007771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.007798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.007938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.007964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.008091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.008117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.008282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.008307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.008435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.008460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.008586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.008611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.008745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.008770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.008866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.008892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.009025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.009050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.009184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.009209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 19:07:32.009336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.009361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.009483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.009509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.009665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.009706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.009852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.009877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.009974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.009999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.010155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.010181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.010311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.010336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.010457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.010482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.010660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.010688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.010803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.010828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 
00:34:20.259 [2024-07-25 19:07:32.010948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.259 [2024-07-25 19:07:32.010974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.259 qpair failed and we were unable to recover it. 00:34:20.259 [2024-07-25 19:07:32.011119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.011147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.011270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.011296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.011394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.011419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.011573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.011598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.011797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.011822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.011917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.011958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.012092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.012119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.012270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.012295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.012418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.012460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 19:07:32.012563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.012591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.012729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.012755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.012904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.012945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.013051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.013099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.013228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.013254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.013384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.013409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.013511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.013536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.013663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.013688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.013781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.013811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.013941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.013966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 19:07:32.014118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.014145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.014290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.014318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.014483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.014508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.014662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.014687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.014787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.014812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.014941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.014966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.015070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.015096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.015226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.015252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.015347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.015373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.015522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.015547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 
00:34:20.260 [2024-07-25 19:07:32.015642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.015684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.015786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.015814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.016017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.260 [2024-07-25 19:07:32.016045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.260 qpair failed and we were unable to recover it. 00:34:20.260 [2024-07-25 19:07:32.016196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.016222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.016364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.016392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.016511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.016536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.016652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.016677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.016850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.016878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.017022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.017047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.017208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.017248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 
00:34:20.261 [2024-07-25 19:07:32.017392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.017421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.017602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.017628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.017755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.017799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.017940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.017969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.018084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.018111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.018272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.018297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.018455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.018484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.018653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.018679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.018800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.018826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.018992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.019017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 
00:34:20.261 [2024-07-25 19:07:32.019121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.019147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.019270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.019296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.019451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.019480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.019628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.019654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.019801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.019826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.019935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.019963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.020092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.020118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.020277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.020302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.020483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.020511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.020660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.020690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 
00:34:20.261 [2024-07-25 19:07:32.020801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.020826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.020929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.020955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.021105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.021131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.021231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.021271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.021405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.021433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.021573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.021599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.021705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.021730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.021878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.021904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.022041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.022102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 00:34:20.261 [2024-07-25 19:07:32.022226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.261 [2024-07-25 19:07:32.022251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.261 qpair failed and we were unable to recover it. 
00:34:20.266 [2024-07-25 19:07:32.048657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.048682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.048806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.048831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.048982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.049011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.049161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.049188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.049284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.049310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.049435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.049461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.049601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.049640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.049763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.049809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.049953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.049982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.050100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.050127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 
00:34:20.266 [2024-07-25 19:07:32.050231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.050257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.050374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.050417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.050547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.050574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.050674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.050699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.050800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.050826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.050927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.050953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.051047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.051081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.051236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.266 [2024-07-25 19:07:32.051265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.266 qpair failed and we were unable to recover it. 00:34:20.266 [2024-07-25 19:07:32.051406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 19:07:32.051434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.267 [2024-07-25 19:07:32.051597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 19:07:32.051623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 
00:34:20.267 [2024-07-25 19:07:32.051811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.267 [2024-07-25 19:07:32.051840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.267 qpair failed and we were unable to recover it. 00:34:20.552 [2024-07-25 19:07:32.051978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.052005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.052133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.052159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.052295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.052321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.052441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.052469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.052649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.052677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.052808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.052836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.052955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.052984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.053110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.053137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.053325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.053368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 
00:34:20.553 [2024-07-25 19:07:32.053544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.053586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.053728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.053772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.053866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.053893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.054037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.054077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.054260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.054288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.054456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.054490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.054618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.054646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.054762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.054790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.054913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.054939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.055072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.055098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 
00:34:20.553 [2024-07-25 19:07:32.055202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.055231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.055342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.055383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.055506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.055532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.055714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.055742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.055856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.055884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.056055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.056089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.056216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.056242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.056393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.056421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.056563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.056591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.056696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.056724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 
00:34:20.553 [2024-07-25 19:07:32.056867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.056895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.057045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.057080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.057208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.057233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.057356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.057381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.057530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.057558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.057688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.057716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.057859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.057887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.058006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.058031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.553 [2024-07-25 19:07:32.058204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.553 [2024-07-25 19:07:32.058231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.553 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.058335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.058378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 
00:34:20.554 [2024-07-25 19:07:32.058517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.058550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.058684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.058712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.058858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.058886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.059025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.059051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.059181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.059207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.059311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.059354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.059549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.059578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.059728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.059771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.059886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.059914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.060037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.060070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 
00:34:20.554 [2024-07-25 19:07:32.060197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.060222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.060373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.060401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.060518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.060560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.060674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.060702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.060810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.060839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.060972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.061001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.061157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.061183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.061305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.061331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.061442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.061470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.061608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.061636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 
00:34:20.554 [2024-07-25 19:07:32.061778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.061807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.061967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.061995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.062100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.062142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.062273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.062299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.062446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.062474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.062634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.062662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.062764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.062792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.062919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.062955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.063071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.063114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.063236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.063262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 
00:34:20.554 [2024-07-25 19:07:32.063415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.063440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.063590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.063619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.063755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.063784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.063947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.063975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.064104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.064130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.064223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.064249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.554 [2024-07-25 19:07:32.064403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.554 [2024-07-25 19:07:32.064428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.554 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.064571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.064599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.064732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.064761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.064885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.064927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 
00:34:20.555 [2024-07-25 19:07:32.065092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.065139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.065272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.065298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.065484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.065510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.065630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.065658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.065800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.065829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.065970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.065999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.066104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.066145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.066279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.066305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.066434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.066459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.066556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.066581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 
00:34:20.555 [2024-07-25 19:07:32.066731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.066759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.066888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.066916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.067050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.067085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.067232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.067257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.067377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.067403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.067503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.067529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.067657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.067682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.067806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.067831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.067953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.067994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.068144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.068174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 
00:34:20.555 [2024-07-25 19:07:32.068312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.068338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.068463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.068504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.068638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.068666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.068815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.068841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.068968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.068994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.069159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.069189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.069342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.069367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.069521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.069547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.069680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.069713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.069836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.069862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 
00:34:20.555 [2024-07-25 19:07:32.069963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.069989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.070114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.070140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.070240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.070265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.555 [2024-07-25 19:07:32.070388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.555 [2024-07-25 19:07:32.070414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.555 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.070561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.070589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.070732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.070757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.070888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.070914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.071069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.071095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.071222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.071248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.071373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.071399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 
00:34:20.556 [2024-07-25 19:07:32.071573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.071601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.071765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.071791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.071932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.071961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.072121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.072147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.072235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.072261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.072359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.072385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.072480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.072506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.072622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.072648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.072771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.072796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.072945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.072970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 
00:34:20.556 [2024-07-25 19:07:32.073096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.073122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.073221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.073246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.073397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.073426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.073570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.073596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.073718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.073743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3692025 Killed "${NVMF_APP[@]}" "$@" 00:34:20.556 [2024-07-25 19:07:32.073876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.073905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.074048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.074093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:20.556 [2024-07-25 19:07:32.074266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:20.556 [2024-07-25 19:07:32.074294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 
00:34:20.556 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:20.556 [2024-07-25 19:07:32.074437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.074466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:20.556 [2024-07-25 19:07:32.074590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.074616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.556 [2024-07-25 19:07:32.074747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.074772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.074940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.074966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.075073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.075107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.075236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.075262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.075395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.075424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.075541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.075566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.075685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.075711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 
00:34:20.556 [2024-07-25 19:07:32.075819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.075845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.556 [2024-07-25 19:07:32.075993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.556 [2024-07-25 19:07:32.076021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.556 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.076178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.076204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.076334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.076360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.076486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.076511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.076618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.076645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.076809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.076835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.076963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.076988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.077114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.077157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.077332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.077358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 
00:34:20.557 [2024-07-25 19:07:32.077459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.077485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.077580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.077606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.077708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.077734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.077861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.077891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.077992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.078018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.078184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.078212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.078365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.078391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3692583 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:20.557 [2024-07-25 19:07:32.078483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.078508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3692583 00:34:20.557 [2024-07-25 19:07:32.078666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.078695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 
00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3692583 ']' 00:34:20.557 [2024-07-25 19:07:32.078818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.078844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.557 [2024-07-25 19:07:32.078944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.078969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.557 [2024-07-25 19:07:32.079155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.079184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:20.557 [2024-07-25 19:07:32.079306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.079332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.557 [2024-07-25 19:07:32.079456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.079482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.079651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.079676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.079802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.079827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 
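[editor's note] The harness lines interleaved above start a fresh nvmf_tgt (pid 3692583) inside the cvl_0_0_ns_spdk network namespace and then wait for it to begin listening on the UNIX domain socket /var/tmp/spdk.sock before the test continues, which is why the message "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." appears while the connection errors keep streaming. Below is a minimal readiness-probe sketch, assuming only the socket path printed in the log; it is an illustration of the idea, not the actual nvmf/common.sh waitforlisten implementation, and the retry count and delay are arbitrary:

/* Minimal sketch of a "wait for listen" readiness probe: retry connecting to
 * the SPDK RPC UNIX-domain socket until the freshly started target process
 * accepts, or a retry budget runs out. Path taken from the log; retry count
 * and back-off are illustration values. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* RPC server is up and listening */
        }

        close(fd);
        usleep(100 * 1000);      /* back off 100 ms before retrying */
    }
    return -1;                   /* never came up within the retry budget */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0) {
        printf("target is listening on /var/tmp/spdk.sock\n");
        return 0;
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}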
00:34:20.557 [2024-07-25 19:07:32.079926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.079951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.080112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.080142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.080265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.557 [2024-07-25 19:07:32.080292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.557 qpair failed and we were unable to recover it. 00:34:20.557 [2024-07-25 19:07:32.080395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.080421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.080545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.080574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.080726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.080751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.080850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.080876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.080999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.081027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.081195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.081221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.081333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.081358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 
00:34:20.558 [2024-07-25 19:07:32.081527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.081552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.081652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.081677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.081804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.081830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.081982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.082010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.082116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.082158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.082279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.082304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.082500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.082526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.082649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.082675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.082773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.082798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.082932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.082960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 
00:34:20.558 [2024-07-25 19:07:32.083125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.083151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.083286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.083311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.083483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.083509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.083636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.083665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.083764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.083789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.083945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.083974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.084138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.084164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.084332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.084360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.084526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.084554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.084706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.084731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 
00:34:20.558 [2024-07-25 19:07:32.084899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.084928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.085089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.085115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.085241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.085267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.085399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.085425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.085547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.085575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.085722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.085747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.085873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.085899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.086047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.086095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.086237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.086267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.086431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.086458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 
00:34:20.558 [2024-07-25 19:07:32.086585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.086611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.558 qpair failed and we were unable to recover it. 00:34:20.558 [2024-07-25 19:07:32.086748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.558 [2024-07-25 19:07:32.086774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.086904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.086931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.087030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.087057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.087167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.087193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.087321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.087346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.087472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.087497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.087614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.087660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.087830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.087859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.087994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.088025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 
00:34:20.559 [2024-07-25 19:07:32.088229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.088258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.088399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.088428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.088605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.088634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.088804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.088833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.089069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.089118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.089221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.089248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.089350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.089376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.089500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.089526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.089633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.089659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.089802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.089827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 
00:34:20.559 [2024-07-25 19:07:32.089983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.090009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.091147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.091190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.091378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.091406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.091538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.091565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.091686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.091712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.091836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.091862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.092016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.092041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.092177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.092203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.092334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.092359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.092513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.092538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 
00:34:20.559 [2024-07-25 19:07:32.092668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.092693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.092798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.092824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.092944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.092970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.093108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.093134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.093236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.093261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.093361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.093387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.093510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.093535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.093648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.093673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.559 [2024-07-25 19:07:32.093778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.559 [2024-07-25 19:07:32.093804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.559 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.093921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.093960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 
00:34:20.560 [2024-07-25 19:07:32.094100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.094128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.094263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.094290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.094393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.094421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.094549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.094575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.094698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.094724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.094854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.094880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.094985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.095136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.095256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.095382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 
00:34:20.560 [2024-07-25 19:07:32.095497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.095658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.095811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.095966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.095991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.096107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.096146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.096267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.096294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.096418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.096443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.096545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.096570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.096727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.096752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.096848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.096874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 
00:34:20.560 [2024-07-25 19:07:32.096984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.097163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.097314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.097434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.097591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.097721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.097836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.097967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.097993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.098119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.098145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.098244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.098269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 
00:34:20.560 [2024-07-25 19:07:32.098375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.098400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.098492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.098517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.098641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.098666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.098775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.098801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.098926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.098951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.099072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.099097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.560 [2024-07-25 19:07:32.099236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.560 [2024-07-25 19:07:32.099261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.560 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.099399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.099425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.099548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.099574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.099712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.099737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 
00:34:20.561 [2024-07-25 19:07:32.099873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.099898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.100972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.100997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.101100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.101127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 
00:34:20.561 [2024-07-25 19:07:32.101281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.101307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.101437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.101462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.101599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.101625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.101726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.101752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.101886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.101911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.102018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.102043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.102160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.102185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.102287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.102312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.102447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.102473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.102599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.102624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 
00:34:20.561 [2024-07-25 19:07:32.102754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.102780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.102876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.102901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.102993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.103018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.103156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.103182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.103311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.103336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.103441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.103471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.103598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.103623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.103719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.103744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.103863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.103902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.104033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.104089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 
00:34:20.561 [2024-07-25 19:07:32.104228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.104255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.104357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.104383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.104518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.104544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.561 qpair failed and we were unable to recover it. 00:34:20.561 [2024-07-25 19:07:32.104699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.561 [2024-07-25 19:07:32.104724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.104850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.104878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.104982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.105119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.105253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.105374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.105524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 
00:34:20.562 [2024-07-25 19:07:32.105678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.105837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.105966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.105992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.106093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.106119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.106223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.106248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.106370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.106395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.106485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.106510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.106644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.106670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.106784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.106824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.106960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.106987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 
00:34:20.562 [2024-07-25 19:07:32.107105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.107131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.107240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.107266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.107359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.107390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.107557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.107582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.107710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.107735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.107843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.107871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.107998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.108024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.108131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.108158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.108288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.108314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.108440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.108466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 
00:34:20.562 [2024-07-25 19:07:32.108579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.108605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.108713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.108739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.108896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.108922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.109022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.109047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.109158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.109184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.109285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.562 [2024-07-25 19:07:32.109310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.562 qpair failed and we were unable to recover it. 00:34:20.562 [2024-07-25 19:07:32.109437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.109462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.109595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.109620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.109720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.109745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.109899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.109938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 
00:34:20.563 [2024-07-25 19:07:32.110078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.110106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.110235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.110260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.110415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.110441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.110556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.110582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.110694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.110720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.110875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.110902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.111034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.111074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.111180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.111206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.111302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.111327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.111420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.111450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 
00:34:20.563 [2024-07-25 19:07:32.111539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.111564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.111719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.111746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.111878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.111903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.112031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.112057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.112189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.112215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.112345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.112371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.112496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.112522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.112651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.112677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.112843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.112882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.113025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.113053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 
00:34:20.563 [2024-07-25 19:07:32.113187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.113213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.113340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.113365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.113471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.113497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.113627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.113653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.113779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.113805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.113936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.113962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.114093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.114121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.114253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.114278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.114411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.114436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.114561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.114586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 
00:34:20.563 [2024-07-25 19:07:32.114735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.114761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.114890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.114915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.115013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.115039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.115153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.563 [2024-07-25 19:07:32.115180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.563 qpair failed and we were unable to recover it. 00:34:20.563 [2024-07-25 19:07:32.115313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.115338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.115439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.115465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.115594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.115621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.115734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.115761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.115859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.115886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.116017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.116043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 
00:34:20.564 [2024-07-25 19:07:32.116201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.116235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.116382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.116420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.116586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.116613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.116745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.116771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.116875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.116900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.116991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.117017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.117156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.117183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.117310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.117335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.117449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.117474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.117576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.117602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 
00:34:20.564 [2024-07-25 19:07:32.117712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.117741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.117871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.117898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.118035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.118068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.118201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.118227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.118354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.118380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.118513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.118539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.118672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.118698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.118802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.118827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.118969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.118994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.119135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.119161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 
00:34:20.564 [2024-07-25 19:07:32.119263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.119289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.119396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.119421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.119525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.119552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.119670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.119708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.119855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.119882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.120009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.120035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.120147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.120174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.120278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.120304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.120433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.120458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.120585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.120612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 
00:34:20.564 [2024-07-25 19:07:32.120752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.120777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.564 [2024-07-25 19:07:32.120865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.564 [2024-07-25 19:07:32.120891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.564 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.120990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.121016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.121137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.121163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.121302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.121328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.121430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.121455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.121558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.121584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.121715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.121741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.121871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.121897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.122053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.122098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 
00:34:20.565 [2024-07-25 19:07:32.122228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.122254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.122350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.122376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.122499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.122524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.122653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.122678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.122802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.122827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.122984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.123009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.123171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.123198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.123300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.123326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.123451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.123476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.123602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.123627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 
00:34:20.565 [2024-07-25 19:07:32.123737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.123764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.123921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.123946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.124101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.124253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.124376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.124494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.124620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.124748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.124873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.124976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.125002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 
00:34:20.565 [2024-07-25 19:07:32.125137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.125163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.125315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.125340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.125435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.125460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.125591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.125616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.125757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.125796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.125921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.125948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.126100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.126127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.126263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.126289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.565 [2024-07-25 19:07:32.126444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.565 [2024-07-25 19:07:32.126469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.565 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.126482] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:34:20.566 [2024-07-25 19:07:32.126553] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.566 [2024-07-25 19:07:32.126566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.126591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.126716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.126741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.126868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.126905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.127040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.127074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.127202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.127229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.127360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.127387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.127518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.127545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.127666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.127693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 00:34:20.566 [2024-07-25 19:07:32.127837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.566 [2024-07-25 19:07:32.127864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.566 qpair failed and we were unable to recover it. 
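Every failure above follows the same pair of messages: posix_sock_create() reports connect() failing with errno 111, and nvme_tcp_qpair_connect_sock() then gives up on the qpair. On Linux errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2 port 4420 (the standard NVMe/TCP port) at the moment of each attempt; the interleaved "Starting SPDK v24.05.1-pre ... / DPDK 22.11.4 initialization" and EAL-parameter lines show a fresh nvmf target process (core mask 0xF0) being brought up while the initiator side keeps retrying. A minimal, purely illustrative POSIX sketch (not SPDK code) that reproduces the same errno when no listener is present:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Attempt a TCP connection to an address/port with no listener. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP default port, as in the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }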
00:34:20.566 [2024-07-25 19:07:32.128001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.566 [2024-07-25 19:07:32.128040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:20.566 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 00:34:20.566 through 00:34:20.573 (wall clock 19:07:32.128 to 19:07:32.159), cycling over tqpairs 0x795570, 0x7f50f8000b90, 0x7f5100000b90, and 0x7f50f0000b90, each time against addr=10.0.0.2, port=4420 with errno = 111 and "qpair failed and we were unable to recover it." ...]
00:34:20.573 [2024-07-25 19:07:32.159221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.159247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.159343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.159369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.159480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.159506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.159608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.159634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.159755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.159781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.159882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.159908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.160055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.160085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.160183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.160208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.160331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.160356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.160463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.160488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 
00:34:20.573 [2024-07-25 19:07:32.160586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.160611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.160703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.160728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.160825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.160852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.161003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.161029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.161151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.161177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.161274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.161304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.161452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.161477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.161575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.161601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.161703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.161728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.161823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.161848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 
00:34:20.573 [2024-07-25 19:07:32.161975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.162099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.162224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.162359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.162512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.162664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.162816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.162956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.162981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.163078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.573 [2024-07-25 19:07:32.163105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.573 qpair failed and we were unable to recover it. 00:34:20.573 [2024-07-25 19:07:32.163240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.163266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 
00:34:20.574 [2024-07-25 19:07:32.163383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.163408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.163533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.163558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.163659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.163684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.163813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.163838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.163939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.163964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.164063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.164088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.164222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.164249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.164377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.164402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.164527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.164552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.164705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.164730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 
00:34:20.574 [2024-07-25 19:07:32.164858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.164883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.164980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.165118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.165239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.165389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.165540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.165690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.165833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.165962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.165988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.574 [2024-07-25 19:07:32.166090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.166116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 
00:34:20.574 [2024-07-25 19:07:32.166217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.166242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.166343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.166370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.166493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.166519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.166640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.166666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.166799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.166824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.166954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.166980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.167081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.167107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.167263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.167288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.167412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.167438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 00:34:20.574 [2024-07-25 19:07:32.167535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.574 [2024-07-25 19:07:32.167561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.574 qpair failed and we were unable to recover it. 
00:34:20.574 [2024-07-25 19:07:32.167689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.167714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.167810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.167835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.167969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.167994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.168105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.168131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.168238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.168263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.168424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.168449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.168602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.168628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.168732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.168757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.168881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.168913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.169014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.169039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 
00:34:20.575 [2024-07-25 19:07:32.169180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.169206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.169339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.169364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.169499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.169524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.169633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.169659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.169777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.169803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.169908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.169934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.170062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.170088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.170178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.170204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.170332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.170357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.170480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.170505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 
00:34:20.575 [2024-07-25 19:07:32.170598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.170623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.170744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.170769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.170877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.170903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.171020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.171065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.171203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.171230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.171333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.171360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.171460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.171486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.171694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.171720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.171845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.171871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.171997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.172024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 
00:34:20.575 [2024-07-25 19:07:32.172136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.172163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.172264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.172291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.575 [2024-07-25 19:07:32.172423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.575 [2024-07-25 19:07:32.172449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.575 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.172551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.172577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.172730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.172756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.172913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.172940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.173076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.173102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.173208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.173233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.173387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.173413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.173528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.173553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 
00:34:20.576 [2024-07-25 19:07:32.173680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.173705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.173833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.173858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.173964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.173989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.174121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.174147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.174245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.174271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.174425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.174450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.174608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.174632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.174733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.174759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.174897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.174927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.175081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.175108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 
00:34:20.576 [2024-07-25 19:07:32.175206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.175231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.175335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.175361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.175483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.175509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.175640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.175665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.175791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.175816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.175908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.175933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.176086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.176111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.176267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.176292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.176449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.176475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.176570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.176596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 
00:34:20.576 [2024-07-25 19:07:32.176749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.176774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.176865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.176890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.177022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.177048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.177175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.177201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.177295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.177321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.177420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.177445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.177600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.177625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.177728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.177754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.177879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.177905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 00:34:20.576 [2024-07-25 19:07:32.178029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.576 [2024-07-25 19:07:32.178055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.576 qpair failed and we were unable to recover it. 
00:34:20.577 [2024-07-25 19:07:32.178158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.178184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.178304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.178330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.178431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.178457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.178609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.178634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.178761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.178787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.178901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.178936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.179072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.179100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.179201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.179229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.179360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.179386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.179525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.179551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 
00:34:20.577 [2024-07-25 19:07:32.179682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.179708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.179859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.179885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.179983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.180009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.180152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.180178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.180283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.180310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.180414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.180441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.180556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.180582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.180717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.180743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.180852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.180888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.181019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.181046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 
00:34:20.577 [2024-07-25 19:07:32.181206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.181245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.181427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.181466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.181631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.181658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.181759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.181785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.181937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.181963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.182099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.182126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.182260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.182288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.182419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.182445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.182577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.182603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.577 qpair failed and we were unable to recover it. 00:34:20.577 [2024-07-25 19:07:32.182727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.577 [2024-07-25 19:07:32.182753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 
00:34:20.578 [2024-07-25 19:07:32.182848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.182873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.183004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.183030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.183143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.183170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.183281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.183310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.183455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.183481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.183614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.183640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.183768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.183794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.183926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.183951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.184068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.184096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.184239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.184264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 
00:34:20.578 [2024-07-25 19:07:32.184390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.184416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.184553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.184579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.184703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.184728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.184858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.184883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.184994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.185020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.185145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.185191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.185335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.185362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.185519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.185545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.185672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.185697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.185828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.185854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 
00:34:20.578 [2024-07-25 19:07:32.185979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.186004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.186109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.186136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.186262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.186288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.186442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.186468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.186593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.186618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.186747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.186773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.186900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.186925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.187026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.187053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.187197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.187223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.187325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.187350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 
00:34:20.578 [2024-07-25 19:07:32.187483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.187508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.187636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.187661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.187789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.187814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.187910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.187937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.188042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.188073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.188184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.188211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.578 qpair failed and we were unable to recover it. 00:34:20.578 [2024-07-25 19:07:32.188346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.578 [2024-07-25 19:07:32.188371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.188525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.188551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.188655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.188681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.188804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.188829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 
00:34:20.579 [2024-07-25 19:07:32.188954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.188979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.189081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.189108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.189211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.189241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.189375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.189400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.189509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.189534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.189683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.189708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.189845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.189871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.190000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.190024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.190166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.190191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.190325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.190351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 
00:34:20.579 [2024-07-25 19:07:32.190453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.190478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.190573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.190599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.190707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.190733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.190828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.190854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.190977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.191002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.191131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.191158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.191294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.191320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.191420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.191446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.191601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.191626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.191748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.191773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 
00:34:20.579 [2024-07-25 19:07:32.191870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.191896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.191994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.192019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.192128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.192154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.192278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.192303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.192401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.192426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.192578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.192603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.192745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.192784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.192929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.192957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.193168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.193207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.193313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.193339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 
00:34:20.579 [2024-07-25 19:07:32.193469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.193494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.193611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.193636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.193735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.579 [2024-07-25 19:07:32.193760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.579 qpair failed and we were unable to recover it. 00:34:20.579 [2024-07-25 19:07:32.193885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.193910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.194002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.194119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.194246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.194392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.194532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.194646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 
00:34:20.580 [2024-07-25 19:07:32.194769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.194899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.194924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.195053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.195089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.195197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.195223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.195312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.195338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.195475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.195500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.195629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.195655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.195780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.195805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.195939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.195964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.196070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.196096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 
00:34:20.580 [2024-07-25 19:07:32.196257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.196282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.196379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.196405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.196529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.196554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.196683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.196709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.196806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.196832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.196979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.197102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.197226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.197367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.197498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 
00:34:20.580 [2024-07-25 19:07:32.197680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.197804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.197930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.197955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.198109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.198135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.198257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.198282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.198378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.198403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.198501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.198527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.198629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.198654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.198778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.198803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 00:34:20.580 [2024-07-25 19:07:32.198935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.580 [2024-07-25 19:07:32.198961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.580 qpair failed and we were unable to recover it. 
00:34:20.580 [2024-07-25 19:07:32.199081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.199107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.199246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.199284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.199426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.199454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.199462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.581 [2024-07-25 19:07:32.199555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.199581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.199708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.199734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.199834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.199860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.199969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.200007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.200148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.200175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.200303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.200328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 
00:34:20.581 [2024-07-25 19:07:32.200457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.200482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.200630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.200656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.200768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.200793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.200898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.200924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.201022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.201047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.201188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.201214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.201306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.201332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.201459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.201484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.201638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.201663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.201788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.201814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 
00:34:20.581 [2024-07-25 19:07:32.201944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.201969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.202097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.202124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.202277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.202302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.202431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.202456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.202592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.202618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.202720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.202745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.202884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.202915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.203047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.203079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.203185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.203210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.203361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.203387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 
00:34:20.581 [2024-07-25 19:07:32.203492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.203517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.203619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.203644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.203748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.203774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.203906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.203931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.204078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.204103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.204207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.204232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.581 [2024-07-25 19:07:32.204362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.581 [2024-07-25 19:07:32.204388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.581 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.204516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.204542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.204669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.204694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.204826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.204852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 
00:34:20.582 [2024-07-25 19:07:32.204967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.205006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.205132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.205171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.205338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.205365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.205530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.205556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.205659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.205685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.205841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.205867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.205996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.206022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.206164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.206190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.206330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.206355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.206483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.206509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 
00:34:20.582 [2024-07-25 19:07:32.206648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.206674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.206808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.206834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.206941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.206969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.207118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.207158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.207322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.207349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.207447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.207473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.207602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.207628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.207730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.207756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.207890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.207915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.208026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.208071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 
00:34:20.582 [2024-07-25 19:07:32.208185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.208212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.208341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.208367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.208492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.208517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.208648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.208674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.208779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.208806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.208970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.208997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.209114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.209140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.209244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.209271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.582 qpair failed and we were unable to recover it. 00:34:20.582 [2024-07-25 19:07:32.209429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.582 [2024-07-25 19:07:32.209454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.209585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.209610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 
00:34:20.583 [2024-07-25 19:07:32.209729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.209755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.209851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.209877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.210013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.210039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.210152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.210180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.210339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.210367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.210479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.210504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.210664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.210689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.210814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.210840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.210966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.210991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.211145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.211171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 
00:34:20.583 [2024-07-25 19:07:32.211306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.211332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.211464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.211489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.211617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.211643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.211794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.211820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.211952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.211978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.212110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.212150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.212287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.212313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.212443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.212469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.212600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.212625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.212750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.212777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 
00:34:20.583 [2024-07-25 19:07:32.212872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.212898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.213028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.213053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.213167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.213193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.213322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.213353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.213468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.213493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.213648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.213674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.213802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.213828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.213954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.213980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.214079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.214105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.214213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.214238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 
00:34:20.583 [2024-07-25 19:07:32.214335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.214361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.214522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.214548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.214679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.214704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.214807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.214835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.583 [2024-07-25 19:07:32.214968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.583 [2024-07-25 19:07:32.214993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.583 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.215134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.215161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.215267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.215293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.215425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.215451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.215551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.215577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.215682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.215707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 
00:34:20.584 [2024-07-25 19:07:32.215831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.215857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.215961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.215985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.216142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.216169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.216295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.216321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.216451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.216478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.216610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.216636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.216764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.216789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.216931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.216956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.217086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.217112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.217238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.217264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 
00:34:20.584 [2024-07-25 19:07:32.217396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.217422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.217530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.217556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.217702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.217741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.217882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.217909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.218044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.218085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.218210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.218236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.218378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.218404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.218543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.218568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.218667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.218692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.218820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.218845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 
00:34:20.584 [2024-07-25 19:07:32.218971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.218997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.219099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.219127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.219230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.219255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.219353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.219385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.219482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.219507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.219637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.219662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.219802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.219840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.219970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.219997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.220103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.220130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.220263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.220288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 
00:34:20.584 [2024-07-25 19:07:32.220387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.220415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.220519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.220544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.584 [2024-07-25 19:07:32.220671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.584 [2024-07-25 19:07:32.220697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.584 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.220835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.220861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.220958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.220984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.221090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.221116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.221239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.221265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.221374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.221400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.221502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.221527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.221627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.221651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 
00:34:20.585 [2024-07-25 19:07:32.221774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.221800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.221908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.221936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.222071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.222097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.222235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.222259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.222410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.222436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.222561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.222586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.222688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.222713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.222813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.222840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.222956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.222995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.223108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.223136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 
00:34:20.585 [2024-07-25 19:07:32.223233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.223264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.223368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.223395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.223525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.223550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.223676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.223703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.223829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.223856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.223988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.224147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.224269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.224405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.224522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 
00:34:20.585 [2024-07-25 19:07:32.224671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.224795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.224917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.224944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.225102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.225129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.225275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.225301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.225454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.225479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.225610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.225635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.225763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.225788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.225917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.225942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.585 qpair failed and we were unable to recover it. 00:34:20.585 [2024-07-25 19:07:32.226070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.585 [2024-07-25 19:07:32.226096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 
00:34:20.586 [2024-07-25 19:07:32.226202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.226226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.226331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.226357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.226466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.226491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.226588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.226613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.226709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.226734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.226877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.226915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.227047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.227080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.227264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.227296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.227401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.227428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.227534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.227560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 
00:34:20.586 [2024-07-25 19:07:32.227697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.227723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.227855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.227880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.228034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.228072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.228187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.228225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.228366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.228394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.228524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.228550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.228681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.228706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.228838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.228864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.228963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.228989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.229126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.229155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 
00:34:20.586 [2024-07-25 19:07:32.229286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.229312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.229418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.229444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.229596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.229621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.229743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.229769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.229899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.229926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.230023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.230049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.230159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.230184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.230292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.230318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.230441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.230467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.230558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.230583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 
00:34:20.586 [2024-07-25 19:07:32.230716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.230742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.230862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.230888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.231017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.231043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.231158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.231184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.231311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.231337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.231460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.231486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.586 qpair failed and we were unable to recover it. 00:34:20.586 [2024-07-25 19:07:32.231589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.586 [2024-07-25 19:07:32.231615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.231706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.231731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.231857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.231883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.231990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.232016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 
00:34:20.587 [2024-07-25 19:07:32.232147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.232173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.232279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.232304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.232434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.232459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.232560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.232587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.232739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.232766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.232904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.232930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.233101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.233141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.233277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.233313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.233420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.233445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.233570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.233596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 
00:34:20.587 [2024-07-25 19:07:32.233731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.233756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.233887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.233913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.234013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.234038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.234184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.234222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.234331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.234357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.234487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.234514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.234643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.234668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.234777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.234802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.234894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.234920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.235049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.235089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 
00:34:20.587 [2024-07-25 19:07:32.235201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.235228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.235369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.235395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.235500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.235526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.235667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.235693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.235798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.235825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.235951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.235979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.587 qpair failed and we were unable to recover it. 00:34:20.587 [2024-07-25 19:07:32.236143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.587 [2024-07-25 19:07:32.236169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.236293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.236319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.236413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.236439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.236540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.236566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 
00:34:20.588 [2024-07-25 19:07:32.236700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.236725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.236858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.236885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.237044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.237186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.237309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.237462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.237585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.237776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.237900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.237996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.238021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 
00:34:20.588 [2024-07-25 19:07:32.238160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.238187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.238321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.238347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.238452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.238477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.238576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.238601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.238707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.238732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.238872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.238911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.239019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.239047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.239191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.239217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.239351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.239377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.239508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.239534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 
00:34:20.588 [2024-07-25 19:07:32.239676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.239702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.239836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.239864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.239998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.240027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.240174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.240201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.240304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.240331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.240457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.240483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.240617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.240643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.240740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.240765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.240865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.240891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.241024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.241049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 
00:34:20.588 [2024-07-25 19:07:32.241184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.241211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.241345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.241372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.588 [2024-07-25 19:07:32.241504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.588 [2024-07-25 19:07:32.241530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.588 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.241660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.241686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.241805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.241831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.241925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.241951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.242055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.242100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.242262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.242288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.242388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.242414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.242519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.242544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 
00:34:20.589 [2024-07-25 19:07:32.242647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.242672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.242800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.242826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.242954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.242980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.243082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.243108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.243225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.243250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.243357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.243384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.243545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.243571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.243670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.243696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.243806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.243833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.243926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.243951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 
00:34:20.589 [2024-07-25 19:07:32.244048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.244086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.244181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.244206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.244361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.244387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.244517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.244542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.244632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.244657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.244794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.244832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.244971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.244999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.245129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.245157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.245251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.245278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.245377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.245403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 
00:34:20.589 [2024-07-25 19:07:32.245499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.245524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.245652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.245677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.245802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.245827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.245930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.245955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.246064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.246091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.246227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.246252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.246374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.246399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.246494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.246520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.246611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.246636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 00:34:20.589 [2024-07-25 19:07:32.246735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.589 [2024-07-25 19:07:32.246761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.589 qpair failed and we were unable to recover it. 
00:34:20.589 [2024-07-25 19:07:32.246878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.246902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.247027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.247053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.247195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.247221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.247329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.247354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.247445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.247470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.247566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.247594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.247706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.247733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.247869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.247896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.248017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.248042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.248147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.248173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 19:07:32.248298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.248324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.248459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.248486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.248617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.248642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.248748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.248772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.248864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.248888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.249012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.249056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.249203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.249231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.249333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.249360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.249478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.249504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.249602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.249627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 19:07:32.249758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.249785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.249894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.249920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.250053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.250083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.250183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.250207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.250357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.250382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.250484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.250510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.250638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.250666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.250801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.250827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.250955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.250981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.251092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.251119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 
00:34:20.590 [2024-07-25 19:07:32.251238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.251264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.251362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.251387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.251549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.251575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.251698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.251723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.251828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.251855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.251962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.590 [2024-07-25 19:07:32.251988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.590 qpair failed and we were unable to recover it. 00:34:20.590 [2024-07-25 19:07:32.252114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.252139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.252267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.252293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.252387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.252412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.252541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.252566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 19:07:32.252700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.252726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.252877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.252902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.253041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.253087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.253200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.253227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.253356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.253382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.253511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.253537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.253672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.253698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.253824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.253850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.253985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.254010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.254119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.254145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 19:07:32.254269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.254294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.254421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.254446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.254570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.254595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.254686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.254711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.254828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.254867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.254970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.255001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.255139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.255168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.255293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.255319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.255424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.255450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.255606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.255633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 19:07:32.255760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.255787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.255883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.255909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.256002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.256028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.256202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.256228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.256385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.256410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.256518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.256545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.256655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.256682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.256815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.256841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.256968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.256996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.257122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.257161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 
00:34:20.591 [2024-07-25 19:07:32.257269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.257296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.591 qpair failed and we were unable to recover it. 00:34:20.591 [2024-07-25 19:07:32.257419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.591 [2024-07-25 19:07:32.257445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.257575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.257600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.257723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.257749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.257888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.257913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.258070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.258109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.258246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.258272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.258399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.258424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.258540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.258565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.258661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.258685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 
00:34:20.592 [2024-07-25 19:07:32.258815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.258840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.258947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.258971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.259103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.259135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.259259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.259284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.259385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.259409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.259509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.259534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.259661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.259685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.259843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.259868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.260003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.260027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.260150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.260176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 
00:34:20.592 [2024-07-25 19:07:32.260312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.260342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.260499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.260524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.260626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.260652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.260784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.260809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.260937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.260962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.261056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.261091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.261200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.261224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.261325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.261349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.261478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.261504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.261639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.261664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 
00:34:20.592 [2024-07-25 19:07:32.261768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.261792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.261925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.261950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.262093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.262131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.262267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.592 [2024-07-25 19:07:32.262294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.592 qpair failed and we were unable to recover it. 00:34:20.592 [2024-07-25 19:07:32.262419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.262445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.262551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.262576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.262703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.262729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.262858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.262883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.262981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.263118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 
00:34:20.593 [2024-07-25 19:07:32.263252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.263379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.263503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.263649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.263795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.263925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.263950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.264078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.264104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.264205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.264230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.264327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.264352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.264447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.264471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 
00:34:20.593 [2024-07-25 19:07:32.264569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.264593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.264748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.264774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.264868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.264893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.265011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.265050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.265165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.265193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.265354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.265380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.265509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.265535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.265636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.265664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.265776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.265815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.265917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.265943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 
00:34:20.593 [2024-07-25 19:07:32.266047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.266078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.266200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.266226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.266328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.266352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.266481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.266506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.266602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.266627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.266796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.266825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.266962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.266990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.267101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.267129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.267235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.267260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.267358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.267385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 
00:34:20.593 [2024-07-25 19:07:32.267514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.593 [2024-07-25 19:07:32.267540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.593 qpair failed and we were unable to recover it. 00:34:20.593 [2024-07-25 19:07:32.267649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.267676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.267774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.267799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.267928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.267953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.268106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.268132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.268230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.268255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.268353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.268378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.268506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.268531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.268636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.268662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.268798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.268826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 
00:34:20.594 [2024-07-25 19:07:32.268961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.268988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.269143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.269169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.269288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.269313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.269410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.269436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.269561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.269587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.269687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.269714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.269808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.269833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.269986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.270109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.270252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 
00:34:20.594 [2024-07-25 19:07:32.270400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.270516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.270632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.270783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.270936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.270961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.271096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.271125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.271262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.271289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.271417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.271443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.271541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.271568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.271697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.271722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 
00:34:20.594 [2024-07-25 19:07:32.271824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.271851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.272014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.272041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.272188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.272213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.272337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.272361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.272493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.272518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.272620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.272644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.272772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.272797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.594 [2024-07-25 19:07:32.272942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.594 [2024-07-25 19:07:32.272981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.594 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.273118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.273145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.273248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.273274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 19:07:32.273395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.273421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.273554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.273581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.273690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.273718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.273827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.273854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.273961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.273986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.274091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.274116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.274220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.274244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.274333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.274358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.274463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.274488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.274589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.274617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 19:07:32.274723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.274754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.274859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.274886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.275920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.275945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 19:07:32.276103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.276128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.276255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.276279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.276413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.276438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.276559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.276583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.276689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.276714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.276852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.276876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.277004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.277031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.277146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.277173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.277279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.277305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.277424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.277451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 
00:34:20.595 [2024-07-25 19:07:32.277581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.277607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.277736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.277762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.277915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.277940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.278084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.595 [2024-07-25 19:07:32.278110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.595 qpair failed and we were unable to recover it. 00:34:20.595 [2024-07-25 19:07:32.278233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.278259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.278390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.278415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.278512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.278536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.278635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.278664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.278773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.278812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.278944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.278970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 19:07:32.279106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.279133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.279234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.279260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.279389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.279415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.279543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.279569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.279707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.279734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.279864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.279889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.280017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.280149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.280272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.280400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 19:07:32.280525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.280657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.280812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.280940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.280966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.281094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.281120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.281218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.281244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.281402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.281427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.281525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.281551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.281658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.281685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.281809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.281834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 
00:34:20.596 [2024-07-25 19:07:32.281930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.281956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.282090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.282116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.282244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.282270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.282403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.282428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.282527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.282561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.596 [2024-07-25 19:07:32.282662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.596 [2024-07-25 19:07:32.282687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.596 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.282810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.282835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.282973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.283000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.283171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.283210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.283308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.283334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-07-25 19:07:32.283450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.283475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.283571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.283595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.283698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.283738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.283848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.283875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.284030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.284055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.284187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.284213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.284316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.284341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.284450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.284475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.284634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.284658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.284761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.284785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-07-25 19:07:32.284927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.284967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.285117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.285155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.285258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.285284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.285414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.285439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.285543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.285568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.285693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.285717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.285820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.285847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.285941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.285967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.286088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.286115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.286235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.286260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-07-25 19:07:32.286354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.286379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.286515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.286540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.286668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.286694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.286818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.286844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.286972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.286998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.287106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.287133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.287242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.287281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.287391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.287419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.287598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.287624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.597 [2024-07-25 19:07:32.287722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.287748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 
00:34:20.597 [2024-07-25 19:07:32.287877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.597 [2024-07-25 19:07:32.287902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.597 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.287997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.288022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.288177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.288216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.288418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.288445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.288545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.288569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.288708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.288733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.288873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.288898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.289029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.289053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.289155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.289180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.289272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.289296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 
00:34:20.598 [2024-07-25 19:07:32.289398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.289425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.289549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.289573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.289702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.289727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.289831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.289855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.289967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.290005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.290121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.290148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.290291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.290329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.290458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.290486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.290668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.290710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.290844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.290872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 
00:34:20.598 [2024-07-25 19:07:32.291007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.291034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.291173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.291199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.291332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.291356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.291455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.291480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.291584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.291610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.291705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.291730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.291861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.291885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.291994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.292020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.292152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.292178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.292307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.292331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 
00:34:20.598 [2024-07-25 19:07:32.292464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.292488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.292589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.292614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.292755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.292784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.292948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.292975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.293088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.293116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.293272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.293298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.293401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.293427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.598 [2024-07-25 19:07:32.293556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.598 [2024-07-25 19:07:32.293582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.598 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.293679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.293705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.293832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.293856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 
00:34:20.599 [2024-07-25 19:07:32.293959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.293983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.294955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.294980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.295089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.295128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 
00:34:20.599 [2024-07-25 19:07:32.295228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.295255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.295350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.295376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.295503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.295503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.599 [2024-07-25 19:07:32.295529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 [2024-07-25 19:07:32.295534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.295549] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.599 [2024-07-25 19:07:32.295561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.599 [2024-07-25 19:07:32.295571] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.599 [2024-07-25 19:07:32.295685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.295710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 [2024-07-25 19:07:32.295658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.295686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:20.599 [2024-07-25 19:07:32.295818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.295711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:20.599 [2024-07-25 19:07:32.295844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 [2024-07-25 19:07:32.295714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.295975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.296006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.296107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.296136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 
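(Editor's note, not part of the captured output.) The app_setup_trace NOTICE lines interleaved above describe how the trace data for this nvmf target can be inspected while the connect errors are occurring. A minimal sketch based only on those messages; the pool name 'nvmf' and instance id 0 come straight from the log, and the copy destination is an arbitrary choice for illustration:

  # Capture a snapshot of the tracepoint events at runtime (command taken from the NOTICE above):
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug (destination path is arbitrary):
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.snapshot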
00:34:20.599 [2024-07-25 19:07:32.296272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.296299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.296410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.296436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.296534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.296560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.296725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.296751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.296858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.296885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.296986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.297123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.297249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.297370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.297494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 
00:34:20.599 [2024-07-25 19:07:32.297657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.297784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.297925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.297951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.298066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.298093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.298206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.298232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.298337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.298363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.298492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.599 [2024-07-25 19:07:32.298519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.599 qpair failed and we were unable to recover it. 00:34:20.599 [2024-07-25 19:07:32.298658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.298685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.298789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.298814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.298958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.298983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.600 [2024-07-25 19:07:32.299111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.299138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.299285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.299311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.299407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.299433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.299576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.299601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.299729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.299754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.299841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.299875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.299983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.300120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.300256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.300391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.600 [2024-07-25 19:07:32.300541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.300692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.300807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.300945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.300970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.301064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.301090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.301241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.301267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.301365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.301391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.301495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.301521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.301641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.301668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.301778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.301804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.600 [2024-07-25 19:07:32.301913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.301938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.302041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.302073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.302171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.302197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.302324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.302350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.302449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.302475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.302603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.302629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.302733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.302773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.302876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.302903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.303037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.303069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 00:34:20.600 [2024-07-25 19:07:32.303166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.600 [2024-07-25 19:07:32.303191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.600 qpair failed and we were unable to recover it. 
00:34:20.600 [2024-07-25 19:07:32.303294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.600 [2024-07-25 19:07:32.303320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:20.600 qpair failed and we were unable to recover it.
00:34:20.600 [2024-07-25 19:07:32.303659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.600 [2024-07-25 19:07:32.303689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:20.600 qpair failed and we were unable to recover it.
00:34:20.600 [2024-07-25 19:07:32.303952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.600 [2024-07-25 19:07:32.303980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420
00:34:20.600 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x795570, tqpair=0x7f50f0000b90 and tqpair=0x7f5100000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:20.605 [2024-07-25 19:07:32.331650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.606 [2024-07-25 19:07:32.331675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:20.606 qpair failed and we were unable to recover it.
00:34:20.606 [2024-07-25 19:07:32.331772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.331798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.331929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.331954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.332066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.332105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.332252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.332291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.332403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.332431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.332555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.332581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.332679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.332706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.332814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.332840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.332942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.332968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.333081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.333110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 19:07:32.333240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.333267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.333393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.333419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.333520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.333546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.333643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.333669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.333759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.333786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.333937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.333963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.334102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.334128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.334215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.334241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.334347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.334372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.334466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.334492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 19:07:32.334593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.334618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.334748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.334779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.334908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.334934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.335078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.335105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.335206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.335232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.335367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.335394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.335482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.335508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.335604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.335630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.335721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.335747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.335846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.335873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 
00:34:20.606 [2024-07-25 19:07:32.336000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.336025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.336141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.336167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.336262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.336287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.336373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.336398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.336491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.336517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.336624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.336649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.606 [2024-07-25 19:07:32.336742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.606 [2024-07-25 19:07:32.336767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.606 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.336870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.336896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.337024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.337051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.337165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.337204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 19:07:32.337321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.337348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.337479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.337505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.337601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.337627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.337726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.337753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.337877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.337903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.338002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.338134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.338259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.338375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.338505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 19:07:32.338648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.338796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.338920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.338946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.339040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.339073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.339174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.339201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.339311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.339336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.339431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.339457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.339583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.339608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.339707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.339733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.339862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.339888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 19:07:32.339988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.340121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.340251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.340413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.340543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.340667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.340822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.340946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.340971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.341069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.341095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.341193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.341221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 19:07:32.341316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.341342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.341459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.341485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.341610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.341636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.341739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.341765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.341888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.341915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.342033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.342080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.342190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.342217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.342345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.342371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.342478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.342504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.342611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.342637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 
00:34:20.607 [2024-07-25 19:07:32.342728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.342753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.342843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.342868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.342990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.343015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.607 [2024-07-25 19:07:32.343128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.607 [2024-07-25 19:07:32.343155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.607 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.343305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.343330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.343425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.343450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.343541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.343567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.343661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.343689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.343787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.343813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.343917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.343943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 19:07:32.344069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.344096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.344229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.344255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.344351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.344377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.344470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.344496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.344599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.344624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.344754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.344782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.344882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.344908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.345008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.345035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.345177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.345203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.345327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.345353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 19:07:32.345451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.345477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.345571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.345597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.345738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.345765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.345868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.345895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.345994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.346120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.346279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.346427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.346558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.346699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 19:07:32.346846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.346970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.346997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.347090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.347116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.347207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.347233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.347361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.347387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.347593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.347623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.347719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.347745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.347872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.347899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.347991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.348017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.348160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.348199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 19:07:32.348325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.348351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.348498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.348524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.348646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.348672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.348774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.348801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.348933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.348959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.349049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.349080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.349208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.349234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.349331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.349357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.349456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.349482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.349610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.349636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 
00:34:20.608 [2024-07-25 19:07:32.349733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.608 [2024-07-25 19:07:32.349758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.608 qpair failed and we were unable to recover it. 00:34:20.608 [2024-07-25 19:07:32.349849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.349876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.349971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.349997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.350156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.350183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.350280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.350306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.350511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.350537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.350633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.350659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.350752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.350778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.350898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.350924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.351017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.351043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 19:07:32.351149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.351176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.351318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.351356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.351467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.351499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.351630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.351657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.351778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.351803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.351904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.351929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.352025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.352052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.352191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.352218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.352325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.352351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.352440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.352465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 19:07:32.352669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.352694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.352791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.352817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.352910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.352936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.353146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.353172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.353268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.353294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.353385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.353411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.353547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.353572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.353673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.353699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.353787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.353812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.353908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.353934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 19:07:32.354030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.354055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.354180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.354206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.354313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.354338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.354542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.354567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.354671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.354698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.354796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.354823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.354921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.354947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.355045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.355080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.355205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.355231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.355328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.355358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 
00:34:20.609 [2024-07-25 19:07:32.355464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.355491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.355597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.355636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.609 qpair failed and we were unable to recover it. 00:34:20.609 [2024-07-25 19:07:32.355741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.609 [2024-07-25 19:07:32.355768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.355866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.355892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.355994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.356155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.356305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.356425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.356548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.356663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.610 [2024-07-25 19:07:32.356785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.356919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.356945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.357074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.357101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.357208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.357234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.357333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.357358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.357461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.357489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.357628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.357656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.357786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.357812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.357938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.357964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.358089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.358116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.610 [2024-07-25 19:07:32.358224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.358251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.358377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.358403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.358522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.358548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.358758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.358783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.358903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.358928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.359028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.359054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.359177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.359207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.359337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.359362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.359481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.359506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.359605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.359631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.610 [2024-07-25 19:07:32.359716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.359741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.359871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.359910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.360027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.360054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.360302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.360329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.360425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.360452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.360575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.360601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.360703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.360729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.360824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.360850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.361070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.361194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 
00:34:20.610 [2024-07-25 19:07:32.361321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.361447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.361563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.361678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.361840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.361973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.361998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.362091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.362118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.362247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.362273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.362366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.362391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.610 qpair failed and we were unable to recover it. 00:34:20.610 [2024-07-25 19:07:32.362476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.610 [2024-07-25 19:07:32.362502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 
00:34:20.611 [2024-07-25 19:07:32.362609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.362634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.362737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.362762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.362865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.362905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.363003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.363036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.363194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.363232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.363364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.363391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.363494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.363521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.363618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.363645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.363770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.363796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.363900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.363925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 
00:34:20.611 [2024-07-25 19:07:32.364053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.364084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.364205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.364230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.364358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.364383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.364484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.364510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.364628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.364653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.364794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.364819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.364924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.364963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.365078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.365106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.365208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.365234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.365364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.365390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 
00:34:20.611 [2024-07-25 19:07:32.365486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.365512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.365639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.365664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.365771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.365797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.365892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.365917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.366045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.366176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.366295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.366415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.366534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.366659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 
00:34:20.611 [2024-07-25 19:07:32.366776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.366901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.366927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.367049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.367082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.367218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.367244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.367341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.367368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.367466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.367492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.367587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.367612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.367716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.367741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.367853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.367893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.368002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.368031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 
00:34:20.611 [2024-07-25 19:07:32.368139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.368166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.368261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.368287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.368448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.368474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.368568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.368595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.368697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.368722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.368849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.611 [2024-07-25 19:07:32.368875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.611 qpair failed and we were unable to recover it. 00:34:20.611 [2024-07-25 19:07:32.368964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.368990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.369100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.369126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.369228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.369254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.369381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.369407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 19:07:32.369504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.369529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.369656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.369681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.369787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.369812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.369904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.369929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.370027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.370053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.370151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.370177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.370278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.370305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.370410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.370442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.370574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.370600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.370720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.370746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 19:07:32.370888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.370913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.371958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.371984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.372124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.372150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 19:07:32.372252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.372277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.372386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.372413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.372511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.372536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.372660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.372685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.372782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.372810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.372957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.372983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.373103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.373129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.373253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.373279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.373374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.373399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.373496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.373522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 19:07:32.373644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.373670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.373772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.373798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.373898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.373924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.374026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.374053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.374170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.374200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.374293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.374318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.374451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.374477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.374580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.374605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.374697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.374723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.374874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.374901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 
00:34:20.612 [2024-07-25 19:07:32.374999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.375025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.375142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.375169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.375297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.375322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.612 [2024-07-25 19:07:32.375423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.612 [2024-07-25 19:07:32.375448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.612 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.375550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.375576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.375705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.375732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.375831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.375856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.375982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.376114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.376282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 
00:34:20.613 [2024-07-25 19:07:32.376408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.376556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.376689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.376805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.376923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.376949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.377043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.377076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.377175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.377200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.377297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.377322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.377418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.377444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.377575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.377601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 
00:34:20.613 [2024-07-25 19:07:32.377695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.377720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.377849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.377879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.377995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.378021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.378124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.378153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.378265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.378291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.378420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.378446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.378573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.378599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.378732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.378759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.378884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.378910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.379010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 
00:34:20.613 [2024-07-25 19:07:32.379143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.379258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.379378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.379528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.379652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.379773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.379931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.379957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.380049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.380081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.380207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.380233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.380331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.380356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 
00:34:20.613 [2024-07-25 19:07:32.380494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.380520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.380639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.380665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.380765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.380790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.380885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.380911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.381030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.381056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.613 qpair failed and we were unable to recover it. 00:34:20.613 [2024-07-25 19:07:32.381199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.613 [2024-07-25 19:07:32.381225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.381320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.381346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.381433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.381458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.381556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.381585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.381685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.381712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-07-25 19:07:32.381799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.381824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.381931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.381956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.382084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.382239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.382361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.382476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.382604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.382728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.382874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.382983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.383011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-07-25 19:07:32.383113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.383139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.383294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.383320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.383453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.383478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.383575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.383601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.383740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.383765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.383889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.383915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.384013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.384137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.384253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.384407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-07-25 19:07:32.384524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.384645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.384798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.384914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.384939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.385042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.385074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.385175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.385210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.385308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.385334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.385463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.385489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.385593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.385618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.385717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.385743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-07-25 19:07:32.385843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.385870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.386111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.386227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.386354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.386509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.386648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.386770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.386894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.386983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.387009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.387108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.387134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 
00:34:20.614 [2024-07-25 19:07:32.387236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.387263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.387392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.387417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.387540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.614 [2024-07-25 19:07:32.387565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.614 qpair failed and we were unable to recover it. 00:34:20.614 [2024-07-25 19:07:32.387662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.387688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.387783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.387809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.387902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.387928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.388036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.388162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.388291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.388417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-07-25 19:07:32.388542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.388675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.388788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.388942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.388968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.389056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.389096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.389225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.389252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.389344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.389369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.389493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.389519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.389620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.389646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.389744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.389770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-07-25 19:07:32.389866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.389891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.389985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.390104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.390226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.390350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.390473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.390610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.390761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.390912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.390938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.391033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.391063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-07-25 19:07:32.391165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.391192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.391287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.391312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.391412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.391438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.391652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.391677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.391877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.391903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.392027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.392053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.392161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.392186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.392388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.392413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.392534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.392560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.392718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.392744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-07-25 19:07:32.392839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.392864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.392965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.392993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.393097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.393123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.393251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.393277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.393405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.393432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.393536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.393561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.393663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.393688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.393778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.393803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.393955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.393980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.615 [2024-07-25 19:07:32.394135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.394161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 
00:34:20.615 [2024-07-25 19:07:32.394257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.615 [2024-07-25 19:07:32.394284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.615 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.394383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.394409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.394533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.394557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.394686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.394711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.394810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.394835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.394935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.394960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.395069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.395198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.395318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.395444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 19:07:32.395602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.395721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.395844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.395961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.395986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.396108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.396135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.396235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.396261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.396387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.396419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.396557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.396582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.396674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.396699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.396829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.396855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 19:07:32.396949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.396974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.397175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.397201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.397299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.397325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.397448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.397474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.397605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.397630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.397729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.397755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.397853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.397879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.397978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.398102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.398225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 19:07:32.398360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.398535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.398693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.398825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.398954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.398980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.399076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.399102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.399198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.399224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.399316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.399342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.399441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.399468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.399568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.399593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 
00:34:20.616 [2024-07-25 19:07:32.399737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.399762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.616 [2024-07-25 19:07:32.399863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.616 [2024-07-25 19:07:32.399888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.616 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.400972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.400997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 19:07:32.401099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.401125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.401221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.401247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.401371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.401396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.401498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.401523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.401643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.401671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.401768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.401794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.401887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.401913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.402038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.402179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.402298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 19:07:32.402411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.402569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.402723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.402844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.402962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.402987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.403087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.403113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.403216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.403241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.403336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.403361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.403487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.403517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.403610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.403636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 
00:34:20.617 [2024-07-25 19:07:32.403763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.403788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.617 [2024-07-25 19:07:32.403918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.617 [2024-07-25 19:07:32.403944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.617 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.404075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.404204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.404331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.404482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.404612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.404727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.404864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.404995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 
00:34:20.886 [2024-07-25 19:07:32.405125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.405259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.405384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.405507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.405639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.405763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.405906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.405932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.406018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.406179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.406293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 
00:34:20.886 [2024-07-25 19:07:32.406476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.406599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.406719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.406847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.406971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.406996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.407124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.886 [2024-07-25 19:07:32.407150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.886 qpair failed and we were unable to recover it. 00:34:20.886 [2024-07-25 19:07:32.407251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.407277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.407377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.407402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.407533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.407558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.407656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.407682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 
00:34:20.887 [2024-07-25 19:07:32.407788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.407814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.407908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.407933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.408068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.408199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.408324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.408443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.408568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.408714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.408869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.408980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 
00:34:20.887 [2024-07-25 19:07:32.409108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.409238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.409367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.409488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.409598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.409731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.409852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.409877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.409975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.410001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.410114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.410140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.410231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.410256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 
00:34:20.887 [2024-07-25 19:07:32.410390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.410414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.410509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.410534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.410735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.410761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.410856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.410881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.411011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.411036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.411141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.411167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.411258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.411283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.411375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.411401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.411490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.411516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 00:34:20.887 [2024-07-25 19:07:32.411622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.887 [2024-07-25 19:07:32.411647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.887 qpair failed and we were unable to recover it. 
00:34:20.888 [2024-07-25 19:07:32.411745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.411770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.411864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.411889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.411984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.412113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.412243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.412364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.412484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.412604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.412721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.412847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 
00:34:20.888 [2024-07-25 19:07:32.412962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.412988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.413083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.413109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.413208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.413233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.413329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.413354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.413472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.413498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.413596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.413621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.413728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.413753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.413868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.413906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.414012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.414156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 
00:34:20.888 [2024-07-25 19:07:32.414276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.414430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.414558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.414703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.414831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.414950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.414975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.415104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.415131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.415225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.415250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.415348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.415374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.415493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.415518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 
00:34:20.888 [2024-07-25 19:07:32.415609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.415634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.415718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.415744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.415834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.415859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.415983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.888 [2024-07-25 19:07:32.416009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.888 qpair failed and we were unable to recover it. 00:34:20.888 [2024-07-25 19:07:32.416105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.416219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.416361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.416483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.416613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.416734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 
00:34:20.889 [2024-07-25 19:07:32.416848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.416973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.416998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.417102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.417129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.417221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.417246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.417371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.417399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.417533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.417560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.417667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.417693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.417803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.417829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.417931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.417958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.418093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.418120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 
00:34:20.889 [2024-07-25 19:07:32.418243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.418269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.418398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.418424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.418525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.418550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.418678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.418703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.418804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.418831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.418924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.418950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.419079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.419105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.419207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.419232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.419338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.419363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.419469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.419496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 
00:34:20.889 [2024-07-25 19:07:32.419594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.419620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.419721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.419747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.419876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.419905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.420003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.420028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.420152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.420178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.420269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.420296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.420434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.420461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.420585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.420612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.420705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.420731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 00:34:20.889 [2024-07-25 19:07:32.420858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.889 [2024-07-25 19:07:32.420884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.889 qpair failed and we were unable to recover it. 
00:34:20.890 [2024-07-25 19:07:32.420986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.421013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.421119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.421146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.421272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.421298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.421394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.421420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.421549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.421575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.421680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.421706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.421806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.421831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.421966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.422148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.422273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 
00:34:20.890 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:20.890 [2024-07-25 19:07:32.422432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:20.890 [2024-07-25 19:07:32.422555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:20.890 [2024-07-25 19:07:32.422679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:20.890 [2024-07-25 19:07:32.422808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.890 [2024-07-25 19:07:32.422934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.422959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.423050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.423082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.423208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.423234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.423330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.423361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 
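Interleaved with the failures above, the xtrace lines from common/autotest_common.sh and nvmf/common.sh show the target-startup helper returning: a retry-counter check `(( i == 0 ))` followed by `return 0`, then `timing_exit start_nvmf_tgt` and `xtrace_disable`/`set +x` to close the traced section. A generic illustration of the countdown-wait shape those first two lines usually come from; the function name and retry budget below are placeholders, not the autotest's actual code:

  # hypothetical helper: wait until a process exists, giving up after a
  # fixed number of one-second retries (name and values are made up)
  wait_for_tgt_pid() {
      local pid=$1
      local i=30                      # placeholder retry budget
      while ! kill -0 "$pid" 2>/dev/null; do
          (( i == 0 )) && return 1    # retries exhausted, give up
          sleep 1
          i=$(( i - 1 ))
      done
      return 0                        # process is up
  }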
00:34:20.890 [2024-07-25 19:07:32.423470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.423496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.423612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.423637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.423735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.423761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.423887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.423913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.424040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.424073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.424212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.424239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.424341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.424372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.424473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.424499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.424599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.424625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.424775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.424801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 
00:34:20.890 [2024-07-25 19:07:32.424897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.424923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.425029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.425056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.425175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.425202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.425335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.425361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.425459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.425485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.890 qpair failed and we were unable to recover it. 00:34:20.890 [2024-07-25 19:07:32.425572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.890 [2024-07-25 19:07:32.425598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.425696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.425728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.425827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.425853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.425997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.426147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 
00:34:20.891 [2024-07-25 19:07:32.426268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.426394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.426506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.426699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.426847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.426973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.426999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.427106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.427133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.427262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.427288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.427387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.427414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.427510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.427536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 
00:34:20.891 [2024-07-25 19:07:32.427632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.427658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.427749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.427775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.427867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.427893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.427993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.428138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.428267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.428388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.428536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.428663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.428787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 
00:34:20.891 [2024-07-25 19:07:32.428972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.428997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.891 [2024-07-25 19:07:32.429092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.891 [2024-07-25 19:07:32.429119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.891 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.429255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.429280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.429376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.429404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.429505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.429531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.429653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.429679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.429846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.429871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.429972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.430103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.430230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 
00:34:20.892 [2024-07-25 19:07:32.430410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.430540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.430690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.430818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.430945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.430971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.431065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.431091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.431213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.431239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.431327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.431352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.431445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.431470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.431570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.431597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 
00:34:20.892 [2024-07-25 19:07:32.431733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.431758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.431885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.431911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.432027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.432160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.432325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.432450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.432588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.432708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.432888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.432974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.433000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 
00:34:20.892 [2024-07-25 19:07:32.433110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.433149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.433250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.433278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.433382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.433408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.433506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.433532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.433685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.892 [2024-07-25 19:07:32.433711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.892 qpair failed and we were unable to recover it. 00:34:20.892 [2024-07-25 19:07:32.433804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.433830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.433932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.433957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.434050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.434083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.434192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.434218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.434315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.434341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 
00:34:20.893 [2024-07-25 19:07:32.434502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.434527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.434623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.434648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.434747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.434775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.434873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.434899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.434994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.435022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.435164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.435190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.435293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.435319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.435415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.435441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.435568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.435594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.435739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.435765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 
00:34:20.893 [2024-07-25 19:07:32.435862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.435889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.435998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.436140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.436272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.436393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.436543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.436657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.436780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.436894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.436919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.437021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.437048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 
00:34:20.893 [2024-07-25 19:07:32.437177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.437204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.437332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.437358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.437456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.437482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.437607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.437635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.437771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.437797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.437898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.437925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.438142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.438168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.438273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.438299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.438499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.438525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5100000b90 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.438735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.438761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 
00:34:20.893 [2024-07-25 19:07:32.438863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.893 [2024-07-25 19:07:32.438888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.893 qpair failed and we were unable to recover it. 00:34:20.893 [2024-07-25 19:07:32.438985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.439011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.439111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.439138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.439239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.439264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.439428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.439454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.439548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.439574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.439710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.439736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.439861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.439887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.439981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.440107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 
00:34:20.894 [2024-07-25 19:07:32.440233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.440358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.440483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.440608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.440728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.440847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.440964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.440990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.441103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.441130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.441249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.441274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.441398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.441424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 
00:34:20.894 [2024-07-25 19:07:32.441529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.441554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.441683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.441708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.441827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.441852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.441947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.441973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.442098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.442124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.442221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.442249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.442350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.442376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.442465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.442490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.442640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.442665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.442756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.442793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 
00:34:20.894 [2024-07-25 19:07:32.442890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.442916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.443020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.443045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.443162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.443188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.443282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.443317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.894 [2024-07-25 19:07:32.443405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.894 [2024-07-25 19:07:32.443430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.894 qpair failed and we were unable to recover it. 00:34:20.895 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.895 [2024-07-25 19:07:32.443557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.443584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.443679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.443705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.443866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.443892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.443989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.444016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 
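Buried in the failure records above, the test installs its cleanup handler: `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT` collects the app's shared-memory diagnostics best-effort (the `|| :` keeps a failed dump from aborting the handler) and then tears down the NVMF test environment regardless of how the script exits. A stripped-down sketch of the same trap pattern, with placeholder functions standing in for the real process_shm and nvmftestfini helpers:

  # placeholder cleanup steps (not the autotest's real helpers)
  dump_state()   { echo "collect diagnostics here"; }
  teardown_env() { echo "stop target, remove subsystems, free hugepages"; }

  # on Ctrl-C, termination, or normal exit: dump diagnostics best-effort
  # ('|| :' swallows its failure), then always run the teardown
  trap 'dump_state || :; teardown_env' SIGINT SIGTERM EXIT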
00:34:20.895 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.895 [2024-07-25 19:07:32.444147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.444173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.444272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.444298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.444395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.444422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.444517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.444543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.444643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.444669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.444765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.444790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.444994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.445020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.445133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.445160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.445263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.445288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 00:34:20.895 [2024-07-25 19:07:32.445419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.895 [2024-07-25 19:07:32.445444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.895 qpair failed and we were unable to recover it. 
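The `rpc_cmd bdev_malloc_create 64 512 -b Malloc0` line traced above asks the running SPDK target to create a RAM-backed bdev named Malloc0, 64 MB in size with a 512-byte block size; rpc_cmd is the autotest wrapper that forwards its arguments to SPDK's JSON-RPC client. Assuming a target listening on the default RPC socket, the equivalent direct invocation would look roughly like this (the relative script path is illustrative):

  # create a 64 MB, 512-byte-block RAM bdev named Malloc0 on the running target
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

  # list the bdev back to confirm it was created (prints JSON)
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0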
00:34:20.895 [2024-07-25 19:07:32.445569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.895 [2024-07-25 19:07:32.445610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420
00:34:20.895 qpair failed and we were unable to recover it.
00:34:20.895 [... identical connect() errno = 111 failures and unrecovered qpairs repeat from 19:07:32.445 through 19:07:32.467 for tqpair=0x795570, tqpair=0x7f50f0000b90 and tqpair=0x7f5100000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:20.900 [2024-07-25 19:07:32.467857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.900 [2024-07-25 19:07:32.467883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420
00:34:20.900 qpair failed and we were unable to recover it.
00:34:20.900 [... connect() errno = 111 failures against tqpair=0x795570 (addr=10.0.0.2, port=4420) continue through 19:07:32.468 ...]
00:34:20.900 Malloc0
00:34:20.900 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:20.900 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:20.900 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:20.900 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:20.900 [... connect() errno = 111 failures against tqpair=0x795570 (addr=10.0.0.2, port=4420) continue through 19:07:32.470 ...]
00:34:20.901 [2024-07-25 19:07:32.470331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.470356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.470450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.470475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.470575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.470600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.470701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.470726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.470855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.470881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.471082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.471117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.471223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.471249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.471354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.471381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.471485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.471511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.471607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.471633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 
00:34:20.901 [2024-07-25 19:07:32.471765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.471791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.471878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.471904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.471967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.901 [2024-07-25 19:07:32.471997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.472163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.472287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.472466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.472583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.472708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.472830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.472951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.472977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 
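Interleaved with the connection errors, the shell trace shows host/target_disconnect.sh@21 issuing rpc_cmd nvmf_create_transport -t tcp -o, and the target acknowledging it with the "*** TCP Transport Init ***" notice above. A rough standalone equivalent is sketched below; the scripts/rpc.py path and default RPC socket are assumptions, and the script's extra -o option is omitted here:

# Sketch: create the NVMe-oF TCP transport on a running nvmf_tgt (paths assumed)
./scripts/rpc.py nvmf_create_transport -t tcp
# the target console then prints the "*** TCP Transport Init ***" notice seen above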
00:34:20.901 [2024-07-25 19:07:32.473111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.473151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.473263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.473290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.473509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.473541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.473670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.473696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.473822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.473848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.473944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.473970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.901 qpair failed and we were unable to recover it. 00:34:20.901 [2024-07-25 19:07:32.474076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.901 [2024-07-25 19:07:32.474113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.474217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.474243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.474450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.474478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.474572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.474598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 
00:34:20.902 [2024-07-25 19:07:32.474704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.474731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.474936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.474962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.475063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.475090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.475198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.475223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.475328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.475353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.475474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.475499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.475602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.475628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.475756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.475781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.475880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.475914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.476005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.476030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 
00:34:20.902 [2024-07-25 19:07:32.476182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.476210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.476344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.476369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.476473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.476499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.476589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.476615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.476825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.476851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.476983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.477010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.477115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.477142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.477251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.477277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.477403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.477428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f0000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 A controller has encountered a failure and is being reset. 00:34:20.902 [2024-07-25 19:07:32.477563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.477599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 
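The "A controller has encountered a failure and is being reset." line above appears to be the host-side NVMe code giving up on the current controller and starting a reset, after which it immediately retries the connection (note the failures alternating between tqpair=0x795570 and the 0x7f50f0000b90/0x7f50f8000b90 qpairs). While the port is still refusing connections, a quick host-side check would show no listener on 4420; the command below is illustrative and not part of the test:

# Illustrative check: is anything listening on the NVMe-oF TCP port yet?
ss -ltn | grep -w 4420 || echo "no listener on 4420 yet"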
00:34:20.902 [2024-07-25 19:07:32.477704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.477731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.477836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.477864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.478050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.478189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.478321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.478439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.478597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.478729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.902 [2024-07-25 19:07:32.478877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.902 qpair failed and we were unable to recover it. 00:34:20.902 [2024-07-25 19:07:32.478980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 
00:34:20.903 [2024-07-25 19:07:32.479142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.479266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.479410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.479541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.479669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.479792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.479918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.479944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.480039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.480071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.480165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.480190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 
00:34:20.903 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.903 [2024-07-25 19:07:32.480325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.480351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.480441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.480466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.903 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.480568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.903 [2024-07-25 19:07:32.480597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.480708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.480734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.480834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.480861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.480980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.481108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.481261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 
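The next traced step, host/target_disconnect.sh@22, creates the subsystem with rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, i.e. NQN nqn.2016-06.io.spdk:cnode1, any host allowed (-a), serial number SPDK00000000000001 (-s). A standalone form would look roughly like this (rpc.py path assumed):

# Sketch: create the subsystem the initiator is trying to reach
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001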
00:34:20.903 [2024-07-25 19:07:32.481419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.481543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.481661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.481792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.481907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.481933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.482035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.482173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.482302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.482427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.482542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 
00:34:20.903 [2024-07-25 19:07:32.482663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.482788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.482935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.903 [2024-07-25 19:07:32.482961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.903 qpair failed and we were unable to recover it. 00:34:20.903 [2024-07-25 19:07:32.483066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.483092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.483198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.483223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.483334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.483360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.483481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.483507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.483609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.483636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.483740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.483768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.483866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.483891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 
00:34:20.904 [2024-07-25 19:07:32.483993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.484137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.484254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.484393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.484516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.484639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.484756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.484892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.484918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.485016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.485160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 
00:34:20.904 [2024-07-25 19:07:32.485284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.485406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.485558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.485678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.485790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.485920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.485945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.486039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.486169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.486328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.486448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 
00:34:20.904 [2024-07-25 19:07:32.486578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.486694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.486813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.486962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.486987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.487078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.487104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.487203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.487228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.487315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.487340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.487436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.904 [2024-07-25 19:07:32.487461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.904 qpair failed and we were unable to recover it. 00:34:20.904 [2024-07-25 19:07:32.487549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.487574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.487671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.487696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 
00:34:20.905 [2024-07-25 19:07:32.487818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.487843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.488003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.488032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.488172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.488198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.905 [2024-07-25 19:07:32.488293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.488319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:20.905 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.905 [2024-07-25 19:07:32.488524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.488550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.905 [2024-07-25 19:07:32.488639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.488665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.488752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.488777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.488916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.488941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 
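host/target_disconnect.sh@24 then attaches the Malloc0 bdev (its name was echoed earlier in the log) to the subsystem as a namespace via rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0. For the initiator's retries against 10.0.0.2:4420 to succeed, a TCP listener on that address and port also has to be added; the listener command in the sketch below is an assumption based on common SPDK rpc.py usage and is not shown in this excerpt:

# Sketch: expose Malloc0 through the subsystem, then add the TCP listener
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# assumed follow-up, not visible in this excerpt:
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420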
00:34:20.905 [2024-07-25 19:07:32.489149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.489176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.489282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.489307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.489429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.489455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.489552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.489577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.489685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.489710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.489840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.489866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.489957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.489982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.490090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.490116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.490213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.490239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.490339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.490365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 
00:34:20.905 [2024-07-25 19:07:32.490494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.490519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.490640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.490665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.490755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.490780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.490871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.490897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.491022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.491150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.491277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.491431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.491545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.491660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 
00:34:20.905 [2024-07-25 19:07:32.491789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.491909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.491934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.492026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.492052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.492159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.905 [2024-07-25 19:07:32.492184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.905 qpair failed and we were unable to recover it. 00:34:20.905 [2024-07-25 19:07:32.492277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.492302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.492403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.492429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.492542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.492568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.492661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.492687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.492810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.492836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.492961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.492986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 
00:34:20.906 [2024-07-25 19:07:32.493086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.493112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.493237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.493264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.493374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.493403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.493508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.493534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.493635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.493660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.493786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.493811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.493918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.493943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.494038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.494070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.494180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.494206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.494335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.494360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 
00:34:20.906 [2024-07-25 19:07:32.494462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.494487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.494586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.494612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.494736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.494761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.494884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.494910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.495015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.495054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.495175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.495202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.495307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.495333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.495431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.495456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.495563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.495589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.495708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.495734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 
00:34:20.906 [2024-07-25 19:07:32.495831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.495857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.495986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.496012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f8000b90 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.496142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.496170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.496266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.496292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 [2024-07-25 19:07:32.496395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.906 [2024-07-25 19:07:32.496421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.906 qpair failed and we were unable to recover it. 00:34:20.906 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.907 [2024-07-25 19:07:32.496519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.496549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.907 [2024-07-25 19:07:32.496646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.496672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.496775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.496801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 
00:34:20.907 [2024-07-25 19:07:32.496900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.496925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.497065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.497227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.497355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.497476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.497601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.497729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.497875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.497980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.498104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 
00:34:20.907 [2024-07-25 19:07:32.498226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.498366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.498489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.498616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.498739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.498892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.498919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.499012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.499038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.499207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.499232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.499321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.499346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.499472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.499497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 
00:34:20.907 [2024-07-25 19:07:32.499591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.499617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.499746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.499772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.499903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.499929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.500056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.907 [2024-07-25 19:07:32.500096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x795570 with addr=10.0.0.2, port=4420 00:34:20.907 qpair failed and we were unable to recover it. 00:34:20.907 [2024-07-25 19:07:32.500251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.907 [2024-07-25 19:07:32.502675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.907 [2024-07-25 19:07:32.502814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.907 [2024-07-25 19:07:32.502842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.907 [2024-07-25 19:07:32.502858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.907 [2024-07-25 19:07:32.502871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.907 [2024-07-25 19:07:32.502910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.907 qpair failed and we were unable to recover it. 
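The repeated `connect() failed, errno = 111` entries above are ECONNREFUSED: the host keeps dialing 10.0.0.2:4420 while no listener is up yet. Once `nvmf_tcp_listen` reports the target listening, the TCP connection itself succeeds, but the Fabrics CONNECT for the I/O qpair is rejected (`Unknown controller ID 0x1`, completion `sct 1, sc 130`) and the qpair is torn down with `CQ transport error -6`. The following is a minimal, self-contained POSIX sketch of that client-side retry pattern; it is not SPDK code, and the address and port simply mirror the log.

```c
/* Minimal illustration of the connect()/ECONNREFUSED pattern seen above.
 * Not SPDK code; 10.0.0.2:4420 only mirrors the addresses in the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt + 1);
            close(fd);
            return 0;
        }
        /* errno 111 on Linux is ECONNREFUSED: nothing is listening yet. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        if (errno != ECONNREFUSED)
            break;              /* only retry the "listener not up yet" case */
        sleep(1);
    }
    return 1;
}
```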
00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.907 19:07:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3692168 00:34:20.907 [2024-07-25 19:07:32.512563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.907 [2024-07-25 19:07:32.512657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.907 [2024-07-25 19:07:32.512684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.907 [2024-07-25 19:07:32.512699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.512712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.512740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.522595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.522754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.522781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.522796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.522809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.522837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 
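In the `sct 1, sc 130` completions, status code type 1 is the command-specific type, and 130 is 0x82, which in the NVMe-oF Fabrics Connect status table appears to be "Connect Invalid Parameters"; that reading is consistent with the target's `Unknown controller ID 0x1` complaint about the I/O qpair's controller ID. A small decoder sketch follows; the name table is assumed from the NVMe-oF Connect command-specific status codes (compare spdk/nvmf_spec.h) rather than taken from this log.

```c
/* Decode the "sct 1, sc 130" completions in the log.
 * The name table below is assumed from the NVMe-oF Fabrics Connect
 * command-specific status codes (0x80-0x84); verify against the spec
 * before relying on it. */
#include <stdio.h>

static const char *connect_status_name(unsigned sct, unsigned sc)
{
    if (sct != 1)               /* 1 = command-specific status type */
        return "not a command-specific status";
    switch (sc) {
    case 0x80: return "Connect Incompatible Format";
    case 0x81: return "Connect Controller Busy";
    case 0x82: return "Connect Invalid Parameters";
    case 0x83: return "Connect Restart Discovery";
    case 0x84: return "Connect Invalid Host";
    default:   return "unknown command-specific status";
    }
}

int main(void)
{
    /* Values copied from the log: sct 1, sc 130 (0x82). */
    printf("sct 1, sc 130 -> %s\n", connect_status_name(1, 130));
    return 0;
}
```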
00:34:20.908 [2024-07-25 19:07:32.532642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.532750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.532776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.532791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.532804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.532832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.542588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.542691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.542717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.542738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.542752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.542781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.552594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.552714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.552740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.552755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.552768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.552796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 
00:34:20.908 [2024-07-25 19:07:32.562636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.562749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.562775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.562790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.562803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.562831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.572605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.572707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.572732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.572746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.572759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.572787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.582618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.582748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.582774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.582788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.582801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.582829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 
00:34:20.908 [2024-07-25 19:07:32.592651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.592788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.592814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.592829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.592842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.592870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.602661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.602755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.602781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.602795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.602808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.602836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.612756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.612900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.612925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.612940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.612953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.612981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 
00:34:20.908 [2024-07-25 19:07:32.622750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.622851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.622877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.622891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.622905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.622933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.632774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.632874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.908 [2024-07-25 19:07:32.632904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.908 [2024-07-25 19:07:32.632920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.908 [2024-07-25 19:07:32.632933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.908 [2024-07-25 19:07:32.632961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.908 qpair failed and we were unable to recover it. 00:34:20.908 [2024-07-25 19:07:32.642791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.908 [2024-07-25 19:07:32.642886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.642912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.642927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.642940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.642967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 
00:34:20.909 [2024-07-25 19:07:32.652792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.652899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.652925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.652939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.652952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.652980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 00:34:20.909 [2024-07-25 19:07:32.662856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.662955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.662981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.662996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.663009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.663037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 00:34:20.909 [2024-07-25 19:07:32.672918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.673031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.673066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.673086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.673100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.673130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 
00:34:20.909 [2024-07-25 19:07:32.682907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.683003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.683029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.683044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.683057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.683096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 00:34:20.909 [2024-07-25 19:07:32.692950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.693051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.693082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.693097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.693111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.693139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 00:34:20.909 [2024-07-25 19:07:32.702964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.703057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.703092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.703106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.703119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.703147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 
00:34:20.909 [2024-07-25 19:07:32.713033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.713181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.713206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.713221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.713235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.713262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 00:34:20.909 [2024-07-25 19:07:32.723015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.723115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.723146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.723162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.723175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.723204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 00:34:20.909 [2024-07-25 19:07:32.733038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.733150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.733176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.733190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.733204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.733232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 
00:34:20.909 [2024-07-25 19:07:32.743099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:20.909 [2024-07-25 19:07:32.743196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:20.909 [2024-07-25 19:07:32.743222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:20.909 [2024-07-25 19:07:32.743237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.909 [2024-07-25 19:07:32.743250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:20.909 [2024-07-25 19:07:32.743277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:20.909 qpair failed and we were unable to recover it. 00:34:21.169 [2024-07-25 19:07:32.753131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.753245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.753271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.753286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.753299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.753326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 00:34:21.169 [2024-07-25 19:07:32.763143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.763249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.763274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.763288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.763301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.763335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 
00:34:21.169 [2024-07-25 19:07:32.773146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.773257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.773282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.773296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.773309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.773337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 00:34:21.169 [2024-07-25 19:07:32.783198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.783299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.783324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.783338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.783351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.783379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 00:34:21.169 [2024-07-25 19:07:32.793244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.793339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.793365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.793379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.793392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.793420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 
00:34:21.169 [2024-07-25 19:07:32.803251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.803353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.803378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.803393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.803407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.803434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 00:34:21.169 [2024-07-25 19:07:32.813281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.813386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.813418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.813433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.813446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.813474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 00:34:21.169 [2024-07-25 19:07:32.823285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.823381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.823406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.823421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.823434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.823462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 
00:34:21.169 [2024-07-25 19:07:32.833359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.833486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.169 [2024-07-25 19:07:32.833510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.169 [2024-07-25 19:07:32.833525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.169 [2024-07-25 19:07:32.833539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.169 [2024-07-25 19:07:32.833567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.169 qpair failed and we were unable to recover it. 00:34:21.169 [2024-07-25 19:07:32.843348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.169 [2024-07-25 19:07:32.843457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.843482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.843496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.843509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.843537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 00:34:21.170 [2024-07-25 19:07:32.853442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.853595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.853621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.853635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.853648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.853681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 
00:34:21.170 [2024-07-25 19:07:32.863483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.863595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.863620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.863634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.863647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.863675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 00:34:21.170 [2024-07-25 19:07:32.873468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.873577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.873602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.873617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.873630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.873657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 00:34:21.170 [2024-07-25 19:07:32.883522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.883615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.883640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.883654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.883667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.883695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 
00:34:21.170 [2024-07-25 19:07:32.893543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.893646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.893671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.893686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.893699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.893727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 00:34:21.170 [2024-07-25 19:07:32.903595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.903710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.903744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.903759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.903772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.903801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 00:34:21.170 [2024-07-25 19:07:32.913592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.913690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.913715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.913730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.913742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.913770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 
00:34:21.170 [2024-07-25 19:07:32.923685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.923800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.923825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.923840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.923853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.923881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 00:34:21.170 [2024-07-25 19:07:32.933665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.933765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.933791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.933805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.933818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.933846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 00:34:21.170 [2024-07-25 19:07:32.943676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.943772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.943797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.170 [2024-07-25 19:07:32.943811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.170 [2024-07-25 19:07:32.943829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.170 [2024-07-25 19:07:32.943858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.170 qpair failed and we were unable to recover it. 
00:34:21.170 [2024-07-25 19:07:32.953706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.170 [2024-07-25 19:07:32.953804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.170 [2024-07-25 19:07:32.953829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:32.953843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:32.953857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:32.953884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 00:34:21.171 [2024-07-25 19:07:32.963761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:32.963862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:32.963887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:32.963901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:32.963915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:32.963943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 00:34:21.171 [2024-07-25 19:07:32.973780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:32.973884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:32.973909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:32.973923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:32.973937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:32.973967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 
00:34:21.171 [2024-07-25 19:07:32.983816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:32.983926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:32.983951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:32.983966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:32.983979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:32.984006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 00:34:21.171 [2024-07-25 19:07:32.993826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:32.993924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:32.993954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:32.993970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:32.993983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:32.994011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 00:34:21.171 [2024-07-25 19:07:33.003894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:33.003994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:33.004019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:33.004034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:33.004047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:33.004081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 
00:34:21.171 [2024-07-25 19:07:33.013894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:33.014002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:33.014027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:33.014041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:33.014054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:33.014093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 00:34:21.171 [2024-07-25 19:07:33.023927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:33.024027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:33.024053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:33.024076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:33.024090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:33.024119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 00:34:21.171 [2024-07-25 19:07:33.033950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:33.034044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:33.034079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:33.034095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:33.034113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:33.034142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 
00:34:21.171 [2024-07-25 19:07:33.044021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.171 [2024-07-25 19:07:33.044162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.171 [2024-07-25 19:07:33.044187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.171 [2024-07-25 19:07:33.044202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.171 [2024-07-25 19:07:33.044215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.171 [2024-07-25 19:07:33.044244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.171 qpair failed and we were unable to recover it. 00:34:21.430 [2024-07-25 19:07:33.054023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.430 [2024-07-25 19:07:33.054145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.430 [2024-07-25 19:07:33.054173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.430 [2024-07-25 19:07:33.054188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.430 [2024-07-25 19:07:33.054201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.430 [2024-07-25 19:07:33.054231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.430 qpair failed and we were unable to recover it. 00:34:21.430 [2024-07-25 19:07:33.064017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.430 [2024-07-25 19:07:33.064131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.430 [2024-07-25 19:07:33.064157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.430 [2024-07-25 19:07:33.064171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.430 [2024-07-25 19:07:33.064184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.430 [2024-07-25 19:07:33.064212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.430 qpair failed and we were unable to recover it. 
00:34:21.430 [2024-07-25 19:07:33.074112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.430 [2024-07-25 19:07:33.074226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.430 [2024-07-25 19:07:33.074250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.430 [2024-07-25 19:07:33.074264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.430 [2024-07-25 19:07:33.074278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.430 [2024-07-25 19:07:33.074306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.430 qpair failed and we were unable to recover it. 00:34:21.430 [2024-07-25 19:07:33.084081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.430 [2024-07-25 19:07:33.084203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.430 [2024-07-25 19:07:33.084228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.430 [2024-07-25 19:07:33.084242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.430 [2024-07-25 19:07:33.084255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.430 [2024-07-25 19:07:33.084282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.430 qpair failed and we were unable to recover it. 00:34:21.430 [2024-07-25 19:07:33.094122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.430 [2024-07-25 19:07:33.094284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.430 [2024-07-25 19:07:33.094309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.430 [2024-07-25 19:07:33.094323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.430 [2024-07-25 19:07:33.094337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.430 [2024-07-25 19:07:33.094365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.430 qpair failed and we were unable to recover it. 
00:34:21.430 [2024-07-25 19:07:33.104144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.430 [2024-07-25 19:07:33.104271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.430 [2024-07-25 19:07:33.104296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.430 [2024-07-25 19:07:33.104310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.430 [2024-07-25 19:07:33.104323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.104351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.114176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.114305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.114332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.114346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.114363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.114393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.124201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.124294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.124319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.124334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.124352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.124381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 
00:34:21.431 [2024-07-25 19:07:33.134241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.134345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.134368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.134382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.134394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.134422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.144245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.144343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.144369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.144383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.144396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.144424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.154282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.154378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.154404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.154418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.154431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.154461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 
00:34:21.431 [2024-07-25 19:07:33.164315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.164409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.164434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.164448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.164462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.164490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.174389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.174499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.174525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.174540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.174553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.174581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.184352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.184446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.184471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.184485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.184498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.184526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 
00:34:21.431 [2024-07-25 19:07:33.194386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.194478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.194503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.194517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.194530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.194558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.204424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.204532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.204557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.204571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.431 [2024-07-25 19:07:33.204584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.431 [2024-07-25 19:07:33.204614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.431 qpair failed and we were unable to recover it. 00:34:21.431 [2024-07-25 19:07:33.214472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.431 [2024-07-25 19:07:33.214579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.431 [2024-07-25 19:07:33.214604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.431 [2024-07-25 19:07:33.214619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.214638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.214666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 
00:34:21.432 [2024-07-25 19:07:33.224486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.224590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.224615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.224630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.224643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.224671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 00:34:21.432 [2024-07-25 19:07:33.234537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.234632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.234658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.234673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.234686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.234713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 00:34:21.432 [2024-07-25 19:07:33.244574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.244701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.244726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.244741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.244754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.244781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 
00:34:21.432 [2024-07-25 19:07:33.254611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.254728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.254753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.254768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.254781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.254809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 00:34:21.432 [2024-07-25 19:07:33.264592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.264689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.264714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.264729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.264741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.264769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 00:34:21.432 [2024-07-25 19:07:33.274613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.274725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.274751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.274765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.274778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.274806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 
00:34:21.432 [2024-07-25 19:07:33.284648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.284765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.284790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.284805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.284818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.284846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 00:34:21.432 [2024-07-25 19:07:33.294738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.294841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.294866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.294881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.294894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.294922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 00:34:21.432 [2024-07-25 19:07:33.304731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.432 [2024-07-25 19:07:33.304842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.432 [2024-07-25 19:07:33.304867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.432 [2024-07-25 19:07:33.304888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.432 [2024-07-25 19:07:33.304902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.432 [2024-07-25 19:07:33.304930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.432 qpair failed and we were unable to recover it. 
00:34:21.689 [2024-07-25 19:07:33.314844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.689 [2024-07-25 19:07:33.314963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.689 [2024-07-25 19:07:33.314989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.689 [2024-07-25 19:07:33.315004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.689 [2024-07-25 19:07:33.315017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.689 [2024-07-25 19:07:33.315044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.324812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.324911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.324936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.324950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.324963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.324991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.334914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.335021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.335047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.335068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.335083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.335111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 
00:34:21.690 [2024-07-25 19:07:33.344902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.345018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.345044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.345064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.345080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.345108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.354841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.354964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.354990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.355004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.355017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.355045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.364888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.364990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.365015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.365029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.365042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.365077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 
00:34:21.690 [2024-07-25 19:07:33.374944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.375068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.375094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.375108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.375121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.375149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.384948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.385083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.385109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.385124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.385137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.385165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.394980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.395085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.395111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.395131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.395146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.395174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 
00:34:21.690 [2024-07-25 19:07:33.405003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.405105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.405131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.405146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.405159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.405187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.415092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.415192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.415218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.415232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.415245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.415272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.425043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.425170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.425195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.425209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.425223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.425250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 
00:34:21.690 [2024-07-25 19:07:33.435065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.435162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.435187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.435202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.435215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.435243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.445128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.445229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.445257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.445273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.445286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.445314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.455175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.455280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.455305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.455320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.455333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.455360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 
00:34:21.690 [2024-07-25 19:07:33.465173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.465275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.465301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.465315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.465328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.465358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.475206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.475298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.475324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.475339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.475352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.475380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.485246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.485358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.485383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.485403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.485417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.485445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 
00:34:21.690 [2024-07-25 19:07:33.495262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.495398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.495423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.495437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.495451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.495478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.505307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.505428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.505453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.505468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.505480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.505509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 00:34:21.690 [2024-07-25 19:07:33.515309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.515405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.515430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.515445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.515458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.690 [2024-07-25 19:07:33.515485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.690 qpair failed and we were unable to recover it. 
00:34:21.690 [2024-07-25 19:07:33.525322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.690 [2024-07-25 19:07:33.525416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.690 [2024-07-25 19:07:33.525441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.690 [2024-07-25 19:07:33.525455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.690 [2024-07-25 19:07:33.525469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.691 [2024-07-25 19:07:33.525496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.691 qpair failed and we were unable to recover it. 00:34:21.691 [2024-07-25 19:07:33.535363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.691 [2024-07-25 19:07:33.535461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.691 [2024-07-25 19:07:33.535486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.691 [2024-07-25 19:07:33.535500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.691 [2024-07-25 19:07:33.535513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.691 [2024-07-25 19:07:33.535541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.691 qpair failed and we were unable to recover it. 00:34:21.691 [2024-07-25 19:07:33.545381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.691 [2024-07-25 19:07:33.545505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.691 [2024-07-25 19:07:33.545531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.691 [2024-07-25 19:07:33.545545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.691 [2024-07-25 19:07:33.545558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.691 [2024-07-25 19:07:33.545586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.691 qpair failed and we were unable to recover it. 
00:34:21.691 [2024-07-25 19:07:33.555408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.691 [2024-07-25 19:07:33.555505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.691 [2024-07-25 19:07:33.555530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.691 [2024-07-25 19:07:33.555545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.691 [2024-07-25 19:07:33.555557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.691 [2024-07-25 19:07:33.555585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.691 qpair failed and we were unable to recover it. 00:34:21.691 [2024-07-25 19:07:33.565427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.691 [2024-07-25 19:07:33.565537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.691 [2024-07-25 19:07:33.565562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.691 [2024-07-25 19:07:33.565576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.691 [2024-07-25 19:07:33.565589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.691 [2024-07-25 19:07:33.565619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.691 qpair failed and we were unable to recover it. 00:34:21.949 [2024-07-25 19:07:33.575492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.949 [2024-07-25 19:07:33.575595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.949 [2024-07-25 19:07:33.575626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.949 [2024-07-25 19:07:33.575641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.949 [2024-07-25 19:07:33.575655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.949 [2024-07-25 19:07:33.575683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.949 qpair failed and we were unable to recover it. 
00:34:21.949 [2024-07-25 19:07:33.585478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.949 [2024-07-25 19:07:33.585580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.949 [2024-07-25 19:07:33.585606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.949 [2024-07-25 19:07:33.585621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.949 [2024-07-25 19:07:33.585635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.949 [2024-07-25 19:07:33.585662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.949 qpair failed and we were unable to recover it. 00:34:21.949 [2024-07-25 19:07:33.595518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.949 [2024-07-25 19:07:33.595613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.949 [2024-07-25 19:07:33.595638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.949 [2024-07-25 19:07:33.595652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.949 [2024-07-25 19:07:33.595665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.949 [2024-07-25 19:07:33.595693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.949 qpair failed and we were unable to recover it. 00:34:21.949 [2024-07-25 19:07:33.605540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.949 [2024-07-25 19:07:33.605680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.949 [2024-07-25 19:07:33.605705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.949 [2024-07-25 19:07:33.605720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.949 [2024-07-25 19:07:33.605733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.949 [2024-07-25 19:07:33.605760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.949 qpair failed and we were unable to recover it. 
00:34:21.949 [2024-07-25 19:07:33.615593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.949 [2024-07-25 19:07:33.615699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.949 [2024-07-25 19:07:33.615731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.949 [2024-07-25 19:07:33.615745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.949 [2024-07-25 19:07:33.615758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.615786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.625670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.625764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.625790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.625804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.625817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.625847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.635660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.635758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.635785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.635800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.635813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.635841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 
00:34:21.950 [2024-07-25 19:07:33.645684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.645792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.645820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.645835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.645848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.645876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.655737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.655854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.655879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.655893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.655906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.655934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.665793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.665891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.665924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.665940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.665953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.665980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 
00:34:21.950 [2024-07-25 19:07:33.675766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.675864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.675890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.675904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.675918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.675945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.685763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.685859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.685885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.685899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.685912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.685940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.695839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.695939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.695964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.695978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.695991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.696018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 
00:34:21.950 [2024-07-25 19:07:33.705824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.705957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.705983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.705997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.706010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.706043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.715863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.715998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.716024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.716038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.716051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.716085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-25 19:07:33.725928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.726022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.726048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.726074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.950 [2024-07-25 19:07:33.726090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.950 [2024-07-25 19:07:33.726119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.950 qpair failed and we were unable to recover it. 
00:34:21.950 [2024-07-25 19:07:33.736000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.950 [2024-07-25 19:07:33.736146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.950 [2024-07-25 19:07:33.736172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.950 [2024-07-25 19:07:33.736187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.736200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.736230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-25 19:07:33.745956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.746053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.746084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.746099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.746112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.746140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-25 19:07:33.755987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.756089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.756119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.756134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.756148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.756176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 
00:34:21.951 [2024-07-25 19:07:33.766008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.766102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.766128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.766142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.766155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.766183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-25 19:07:33.776037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.776145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.776171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.776185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.776198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.776226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-25 19:07:33.786074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.786174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.786199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.786213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.786226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.786254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 
00:34:21.951 [2024-07-25 19:07:33.796111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.796202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.796228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.796242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.796255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.796289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-25 19:07:33.806120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.806243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.806268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.806282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.806295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.806323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-25 19:07:33.816156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:21.951 [2024-07-25 19:07:33.816277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:21.951 [2024-07-25 19:07:33.816303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:21.951 [2024-07-25 19:07:33.816317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:21.951 [2024-07-25 19:07:33.816330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:21.951 [2024-07-25 19:07:33.816357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.951 qpair failed and we were unable to recover it. 
00:34:22.212 [2024-07-25 19:07:33.826240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.826370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.826395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.826409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.826423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.826450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 00:34:22.212 [2024-07-25 19:07:33.836232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.836349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.836375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.836389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.836402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.836430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 00:34:22.212 [2024-07-25 19:07:33.846215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.846320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.846350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.846365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.846379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.846407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 
00:34:22.212 [2024-07-25 19:07:33.856265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.856397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.856422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.856436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.856448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.856476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 00:34:22.212 [2024-07-25 19:07:33.866271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.866366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.866390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.866405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.866418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.866446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 00:34:22.212 [2024-07-25 19:07:33.876333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.876444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.876469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.876484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.876497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.876525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 
00:34:22.212 [2024-07-25 19:07:33.886338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.886442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.886468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.886482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.886495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.886530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 00:34:22.212 [2024-07-25 19:07:33.896393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.896512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.896538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.896552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.896565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.896593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 00:34:22.212 [2024-07-25 19:07:33.906450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.906552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.906578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.906592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.906606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.212 [2024-07-25 19:07:33.906634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.212 qpair failed and we were unable to recover it. 
00:34:22.212 [2024-07-25 19:07:33.916435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.212 [2024-07-25 19:07:33.916558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.212 [2024-07-25 19:07:33.916584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.212 [2024-07-25 19:07:33.916598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.212 [2024-07-25 19:07:33.916611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.916639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:33.926514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.926615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.926640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.926655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.926668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.926696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:33.936508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.936609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.936639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.936655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.936668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.936696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 
00:34:22.213 [2024-07-25 19:07:33.946540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.946641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.946666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.946681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.946694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.946722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:33.956568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.956663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.956688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.956703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.956716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.956744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:33.966605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.966705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.966730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.966745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.966759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.966787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 
00:34:22.213 [2024-07-25 19:07:33.976603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.976703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.976728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.976742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.976760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.976789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:33.986668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.986797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.986822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.986836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.986849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.986877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:33.996721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:33.996825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:33.996851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:33.996865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:33.996878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:33.996906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 
00:34:22.213 [2024-07-25 19:07:34.006691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:34.006818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:34.006844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:34.006858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:34.006871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:34.006899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:34.016759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:34.016865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:34.016890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:34.016905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:34.016918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:34.016945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:34.026732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:34.026833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:34.026858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:34.026872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:34.026886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:34.026914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 
00:34:22.213 [2024-07-25 19:07:34.036766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:34.036897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:34.036923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:34.036937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:34.036950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:34.036979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.213 qpair failed and we were unable to recover it. 00:34:22.213 [2024-07-25 19:07:34.046790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.213 [2024-07-25 19:07:34.046885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.213 [2024-07-25 19:07:34.046911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.213 [2024-07-25 19:07:34.046925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.213 [2024-07-25 19:07:34.046938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.213 [2024-07-25 19:07:34.046965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.214 qpair failed and we were unable to recover it. 00:34:22.214 [2024-07-25 19:07:34.056882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.214 [2024-07-25 19:07:34.056987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.214 [2024-07-25 19:07:34.057012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.214 [2024-07-25 19:07:34.057027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.214 [2024-07-25 19:07:34.057040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.214 [2024-07-25 19:07:34.057076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.214 qpair failed and we were unable to recover it. 
00:34:22.214 [2024-07-25 19:07:34.066856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.214 [2024-07-25 19:07:34.066955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.214 [2024-07-25 19:07:34.066981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.214 [2024-07-25 19:07:34.066995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.214 [2024-07-25 19:07:34.067013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.214 [2024-07-25 19:07:34.067044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.214 qpair failed and we were unable to recover it. 00:34:22.214 [2024-07-25 19:07:34.076913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.214 [2024-07-25 19:07:34.077019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.214 [2024-07-25 19:07:34.077044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.214 [2024-07-25 19:07:34.077065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.214 [2024-07-25 19:07:34.077080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.214 [2024-07-25 19:07:34.077109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.214 qpair failed and we were unable to recover it. 00:34:22.214 [2024-07-25 19:07:34.086897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.214 [2024-07-25 19:07:34.086992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.214 [2024-07-25 19:07:34.087018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.214 [2024-07-25 19:07:34.087032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.214 [2024-07-25 19:07:34.087044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.214 [2024-07-25 19:07:34.087077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.214 qpair failed and we were unable to recover it. 
00:34:22.475 [2024-07-25 19:07:34.096944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.475 [2024-07-25 19:07:34.097064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.475 [2024-07-25 19:07:34.097090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.475 [2024-07-25 19:07:34.097104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.475 [2024-07-25 19:07:34.097117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.475 [2024-07-25 19:07:34.097146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.475 qpair failed and we were unable to recover it. 00:34:22.475 [2024-07-25 19:07:34.106946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.475 [2024-07-25 19:07:34.107045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.475 [2024-07-25 19:07:34.107077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.475 [2024-07-25 19:07:34.107092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.475 [2024-07-25 19:07:34.107105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.475 [2024-07-25 19:07:34.107135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.475 qpair failed and we were unable to recover it. 00:34:22.475 [2024-07-25 19:07:34.117009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.475 [2024-07-25 19:07:34.117128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.475 [2024-07-25 19:07:34.117153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.475 [2024-07-25 19:07:34.117168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.475 [2024-07-25 19:07:34.117181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.475 [2024-07-25 19:07:34.117208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.475 qpair failed and we were unable to recover it. 
00:34:22.475 [2024-07-25 19:07:34.126998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.475 [2024-07-25 19:07:34.127091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.475 [2024-07-25 19:07:34.127116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.475 [2024-07-25 19:07:34.127131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.475 [2024-07-25 19:07:34.127144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.475 [2024-07-25 19:07:34.127172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.475 qpair failed and we were unable to recover it. 00:34:22.475 [2024-07-25 19:07:34.137042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.137148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.137173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.137186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.137198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.137225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.147096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.147220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.147246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.147261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.147274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.147302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 
00:34:22.476 [2024-07-25 19:07:34.157116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.157216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.157242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.157256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.157274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.157304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.167147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.167274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.167299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.167313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.167326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.167354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.177177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.177320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.177345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.177360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.177373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.177400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 
00:34:22.476 [2024-07-25 19:07:34.187215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.187333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.187361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.187377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.187390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.187419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.197226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.197326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.197352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.197366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.197379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.197407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.207246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.207345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.207371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.207385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.207398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.207426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 
00:34:22.476 [2024-07-25 19:07:34.217273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.217374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.217400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.217414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.217427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.217455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.227307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.227406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.227433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.227451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.227464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.227493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.237358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.237467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.237493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.237507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.237521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.237548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 
00:34:22.476 [2024-07-25 19:07:34.247350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.247442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.247467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.247487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.247501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.247529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.257402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.257534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.476 [2024-07-25 19:07:34.257559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.476 [2024-07-25 19:07:34.257573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.476 [2024-07-25 19:07:34.257586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.476 [2024-07-25 19:07:34.257614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.476 qpair failed and we were unable to recover it. 00:34:22.476 [2024-07-25 19:07:34.267425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.476 [2024-07-25 19:07:34.267520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.267545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.267559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.267573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.267600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 
00:34:22.477 [2024-07-25 19:07:34.277470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.277598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.277623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.277637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.277650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.277678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 00:34:22.477 [2024-07-25 19:07:34.287478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.287616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.287641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.287655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.287668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.287695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 00:34:22.477 [2024-07-25 19:07:34.297530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.297627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.297652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.297666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.297678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.297706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 
00:34:22.477 [2024-07-25 19:07:34.307537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.307661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.307687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.307701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.307714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.307741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 00:34:22.477 [2024-07-25 19:07:34.317613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.317728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.317753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.317768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.317781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.317809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 00:34:22.477 [2024-07-25 19:07:34.327618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.327716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.327741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.327755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.327768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.327796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 
00:34:22.477 [2024-07-25 19:07:34.337640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.337743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.337768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.337788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.337801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.337830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 00:34:22.477 [2024-07-25 19:07:34.347680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.477 [2024-07-25 19:07:34.347777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.477 [2024-07-25 19:07:34.347803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.477 [2024-07-25 19:07:34.347817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.477 [2024-07-25 19:07:34.347830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.477 [2024-07-25 19:07:34.347858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.477 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.357700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.357812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.357838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.357853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.357866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.357895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 
00:34:22.739 [2024-07-25 19:07:34.367696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.367829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.367854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.367869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.367882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.367909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.377750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.377859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.377888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.377904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.377917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.377945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.387764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.387867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.387893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.387908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.387920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.387948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 
00:34:22.739 [2024-07-25 19:07:34.397790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.397890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.397916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.397931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.397944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.397972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.407834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.407931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.407957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.407971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.407985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.408012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.417862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.417965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.417990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.418004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.418017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.418045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 
00:34:22.739 [2024-07-25 19:07:34.427875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.427974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.427999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.428023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.428037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.428074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.437917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.438024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.438050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.438074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.438088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.438116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.447933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.448034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.448066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.448083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.448097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.448127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 
00:34:22.739 [2024-07-25 19:07:34.457971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.458080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.458105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.458120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.458133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.739 [2024-07-25 19:07:34.458161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.739 qpair failed and we were unable to recover it. 00:34:22.739 [2024-07-25 19:07:34.467970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.739 [2024-07-25 19:07:34.468113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.739 [2024-07-25 19:07:34.468138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.739 [2024-07-25 19:07:34.468152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.739 [2024-07-25 19:07:34.468165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.468194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.478017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.478119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.478144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.478158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.478171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.478199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 
00:34:22.740 [2024-07-25 19:07:34.488034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.488137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.488163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.488178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.488191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.488218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.498091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.498204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.498230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.498244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.498257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.498285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.508092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.508191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.508217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.508231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.508244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.508272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 
00:34:22.740 [2024-07-25 19:07:34.518151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.518271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.518301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.518316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.518330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.518357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.528150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.528244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.528268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.528282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.528296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.528323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.538214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.538319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.538344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.538359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.538372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.538400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 
00:34:22.740 [2024-07-25 19:07:34.548249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.548347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.548373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.548388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.548401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.548428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.558288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.558399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.558424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.558438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.558451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.558479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.568272] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.568369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.568395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.568409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.568422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.568450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 
00:34:22.740 [2024-07-25 19:07:34.578342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.578445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.578471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.578485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.578498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.578525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.588343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.588446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.588472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.588486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.588499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.588527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 00:34:22.740 [2024-07-25 19:07:34.598358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.740 [2024-07-25 19:07:34.598450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.740 [2024-07-25 19:07:34.598475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.740 [2024-07-25 19:07:34.598489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.740 [2024-07-25 19:07:34.598502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.740 [2024-07-25 19:07:34.598529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.740 qpair failed and we were unable to recover it. 
00:34:22.741 [2024-07-25 19:07:34.608390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.741 [2024-07-25 19:07:34.608482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.741 [2024-07-25 19:07:34.608512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.741 [2024-07-25 19:07:34.608527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.741 [2024-07-25 19:07:34.608540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:22.741 [2024-07-25 19:07:34.608568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.741 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.618545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.618642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.618667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.618681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.618694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.618722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.628488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.628616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.628641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.628655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.628668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.628696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 
00:34:23.002 [2024-07-25 19:07:34.638529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.638631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.638656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.638670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.638683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.638711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.648496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.648592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.648618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.648633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.648647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.648680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.658563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.658666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.658691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.658706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.658719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.658747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 
00:34:23.002 [2024-07-25 19:07:34.668602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.668715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.668740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.668754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.668768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.668795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.678608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.678704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.678730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.678744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.678757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.678785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.688649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.688776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.688801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.688815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.688829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.688856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 
00:34:23.002 [2024-07-25 19:07:34.698678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.698784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.698815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.698830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.698843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.698871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.708674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.708774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.708799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.002 [2024-07-25 19:07:34.708814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.002 [2024-07-25 19:07:34.708827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.002 [2024-07-25 19:07:34.708855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.002 qpair failed and we were unable to recover it. 00:34:23.002 [2024-07-25 19:07:34.718729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.002 [2024-07-25 19:07:34.718846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.002 [2024-07-25 19:07:34.718875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.718891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.718904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.718933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 
00:34:23.003 [2024-07-25 19:07:34.728771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.728905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.728931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.728946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.728959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.728987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.738817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.738921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.738946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.738961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.738974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.739008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.748807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.748909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.748934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.748949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.748961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.748989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 
00:34:23.003 [2024-07-25 19:07:34.758942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.759040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.759073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.759089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.759102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.759130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.768854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.768976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.769002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.769016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.769029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.769057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.778908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.779004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.779029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.779044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.779057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.779097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 
00:34:23.003 [2024-07-25 19:07:34.788895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.789000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.789030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.789045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.789067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.789098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.798924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.799039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.799071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.799091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.799105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.799133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.808959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.809074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.809100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.809114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.809127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.809155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 
00:34:23.003 [2024-07-25 19:07:34.819065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.819169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.819194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.819208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.819221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.819250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.829029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.829133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.829158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.829173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.829186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.829220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 00:34:23.003 [2024-07-25 19:07:34.839048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.839148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.839173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.003 [2024-07-25 19:07:34.839187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.003 [2024-07-25 19:07:34.839200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.003 [2024-07-25 19:07:34.839228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.003 qpair failed and we were unable to recover it. 
00:34:23.003 [2024-07-25 19:07:34.849099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.003 [2024-07-25 19:07:34.849201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.003 [2024-07-25 19:07:34.849226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.004 [2024-07-25 19:07:34.849240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.004 [2024-07-25 19:07:34.849253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.004 [2024-07-25 19:07:34.849281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.004 qpair failed and we were unable to recover it. 00:34:23.004 [2024-07-25 19:07:34.859201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.004 [2024-07-25 19:07:34.859302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.004 [2024-07-25 19:07:34.859327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.004 [2024-07-25 19:07:34.859341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.004 [2024-07-25 19:07:34.859354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.004 [2024-07-25 19:07:34.859382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.004 qpair failed and we were unable to recover it. 00:34:23.004 [2024-07-25 19:07:34.869213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.004 [2024-07-25 19:07:34.869328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.004 [2024-07-25 19:07:34.869356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.004 [2024-07-25 19:07:34.869371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.004 [2024-07-25 19:07:34.869384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.004 [2024-07-25 19:07:34.869412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.004 qpair failed and we were unable to recover it. 
00:34:23.264 [2024-07-25 19:07:34.879198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.264 [2024-07-25 19:07:34.879293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.264 [2024-07-25 19:07:34.879323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.264 [2024-07-25 19:07:34.879339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.264 [2024-07-25 19:07:34.879352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.264 [2024-07-25 19:07:34.879380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.264 qpair failed and we were unable to recover it. 00:34:23.264 [2024-07-25 19:07:34.889228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.264 [2024-07-25 19:07:34.889325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.264 [2024-07-25 19:07:34.889351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.264 [2024-07-25 19:07:34.889365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.264 [2024-07-25 19:07:34.889379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.264 [2024-07-25 19:07:34.889406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.264 qpair failed and we were unable to recover it. 00:34:23.264 [2024-07-25 19:07:34.899342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.264 [2024-07-25 19:07:34.899475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.264 [2024-07-25 19:07:34.899500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.264 [2024-07-25 19:07:34.899514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.264 [2024-07-25 19:07:34.899527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.264 [2024-07-25 19:07:34.899555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.264 qpair failed and we were unable to recover it. 
00:34:23.264 [2024-07-25 19:07:34.909301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.264 [2024-07-25 19:07:34.909427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.264 [2024-07-25 19:07:34.909452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.264 [2024-07-25 19:07:34.909466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.264 [2024-07-25 19:07:34.909480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.264 [2024-07-25 19:07:34.909507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.264 qpair failed and we were unable to recover it. 00:34:23.264 [2024-07-25 19:07:34.919308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.264 [2024-07-25 19:07:34.919408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.264 [2024-07-25 19:07:34.919434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.264 [2024-07-25 19:07:34.919448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.264 [2024-07-25 19:07:34.919467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.264 [2024-07-25 19:07:34.919495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.264 qpair failed and we were unable to recover it. 00:34:23.264 [2024-07-25 19:07:34.929358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.264 [2024-07-25 19:07:34.929457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.264 [2024-07-25 19:07:34.929482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.264 [2024-07-25 19:07:34.929497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.264 [2024-07-25 19:07:34.929510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.264 [2024-07-25 19:07:34.929538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.264 qpair failed and we were unable to recover it. 
00:34:23.264 [2024-07-25 19:07:34.939388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.264 [2024-07-25 19:07:34.939490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:34.939515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:34.939529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:34.939542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:34.939570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:34.949379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:34.949491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:34.949516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:34.949531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:34.949543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:34.949572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:34.959403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:34.959502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:34.959527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:34.959541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:34.959555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:34.959582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 
00:34:23.265 [2024-07-25 19:07:34.969461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:34.969560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:34.969585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:34.969600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:34.969613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:34.969641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:34.979483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:34.979582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:34.979607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:34.979622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:34.979634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:34.979664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:34.989504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:34.989600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:34.989625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:34.989639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:34.989652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:34.989679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 
00:34:23.265 [2024-07-25 19:07:34.999515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:34.999618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:34.999643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:34.999657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:34.999670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:34.999698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:35.009566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:35.009693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:35.009718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:35.009732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:35.009750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:35.009779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:35.019624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:35.019734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:35.019760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:35.019775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:35.019788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:35.019816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 
00:34:23.265 [2024-07-25 19:07:35.029617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:35.029715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:35.029741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:35.029756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:35.029770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:35.029798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:35.039648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:35.039773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:35.039799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:35.039814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:35.039827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:35.039854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:35.049667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:35.049765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:35.049791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:35.049805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:35.049818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:35.049846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 
00:34:23.265 [2024-07-25 19:07:35.059716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:35.059865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:35.059890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:35.059905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.265 [2024-07-25 19:07:35.059918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.265 [2024-07-25 19:07:35.059945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.265 qpair failed and we were unable to recover it. 00:34:23.265 [2024-07-25 19:07:35.069738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.265 [2024-07-25 19:07:35.069850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.265 [2024-07-25 19:07:35.069879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.265 [2024-07-25 19:07:35.069894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.069907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.069937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 00:34:23.266 [2024-07-25 19:07:35.079767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.266 [2024-07-25 19:07:35.079878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.266 [2024-07-25 19:07:35.079905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.266 [2024-07-25 19:07:35.079920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.079936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.079967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 
00:34:23.266 [2024-07-25 19:07:35.089783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.266 [2024-07-25 19:07:35.089878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.266 [2024-07-25 19:07:35.089904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.266 [2024-07-25 19:07:35.089918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.089931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.089958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 00:34:23.266 [2024-07-25 19:07:35.099799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.266 [2024-07-25 19:07:35.099931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.266 [2024-07-25 19:07:35.099957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.266 [2024-07-25 19:07:35.099972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.099990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.100020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 00:34:23.266 [2024-07-25 19:07:35.109856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.266 [2024-07-25 19:07:35.109952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.266 [2024-07-25 19:07:35.109978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.266 [2024-07-25 19:07:35.109993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.110006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.110034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 
00:34:23.266 [2024-07-25 19:07:35.119886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.266 [2024-07-25 19:07:35.119979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.266 [2024-07-25 19:07:35.120004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.266 [2024-07-25 19:07:35.120019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.120032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.120074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 00:34:23.266 [2024-07-25 19:07:35.129897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.266 [2024-07-25 19:07:35.129993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.266 [2024-07-25 19:07:35.130018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.266 [2024-07-25 19:07:35.130032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.130045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.130080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 00:34:23.266 [2024-07-25 19:07:35.139986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.266 [2024-07-25 19:07:35.140114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.266 [2024-07-25 19:07:35.140138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.266 [2024-07-25 19:07:35.140151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.266 [2024-07-25 19:07:35.140163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.266 [2024-07-25 19:07:35.140191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.266 qpair failed and we were unable to recover it. 
00:34:23.525 [2024-07-25 19:07:35.149968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.525 [2024-07-25 19:07:35.150083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.525 [2024-07-25 19:07:35.150109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.525 [2024-07-25 19:07:35.150123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.525 [2024-07-25 19:07:35.150136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.525 [2024-07-25 19:07:35.150164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.525 qpair failed and we were unable to recover it. 00:34:23.525 [2024-07-25 19:07:35.159976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.525 [2024-07-25 19:07:35.160101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.525 [2024-07-25 19:07:35.160126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.525 [2024-07-25 19:07:35.160141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.525 [2024-07-25 19:07:35.160153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.525 [2024-07-25 19:07:35.160182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.525 qpair failed and we were unable to recover it. 00:34:23.525 [2024-07-25 19:07:35.170014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.525 [2024-07-25 19:07:35.170119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.525 [2024-07-25 19:07:35.170145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.525 [2024-07-25 19:07:35.170159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.525 [2024-07-25 19:07:35.170172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.525 [2024-07-25 19:07:35.170200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.525 qpair failed and we were unable to recover it. 
00:34:23.525 [2024-07-25 19:07:35.180055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.525 [2024-07-25 19:07:35.180160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.525 [2024-07-25 19:07:35.180185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.525 [2024-07-25 19:07:35.180199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.525 [2024-07-25 19:07:35.180212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.525 [2024-07-25 19:07:35.180240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.525 qpair failed and we were unable to recover it. 00:34:23.525 [2024-07-25 19:07:35.190109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.525 [2024-07-25 19:07:35.190200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.525 [2024-07-25 19:07:35.190225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.525 [2024-07-25 19:07:35.190249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.525 [2024-07-25 19:07:35.190263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.525 [2024-07-25 19:07:35.190292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.525 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.200113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.200212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.200237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.200251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.200265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.200292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 
00:34:23.526 [2024-07-25 19:07:35.210176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.210277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.210303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.210317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.210330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.210357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.220166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.220269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.220295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.220309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.220323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.220351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.230203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.230302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.230328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.230342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.230355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.230384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 
00:34:23.526 [2024-07-25 19:07:35.240218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.240328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.240353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.240368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.240381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.240409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.250290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.250408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.250434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.250449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.250462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.250489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.260318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.260460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.260485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.260500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.260514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.260541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 
00:34:23.526 [2024-07-25 19:07:35.270317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.270420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.270446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.270460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.270474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.270502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.280378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.280503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.280529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.280549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.280563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.280591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.290392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.290489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.290514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.290528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.290541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.290569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 
00:34:23.526 [2024-07-25 19:07:35.300421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.300527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.300552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.300566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.300580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.300607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.310428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.310530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.310556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.310570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.310584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.310611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 00:34:23.526 [2024-07-25 19:07:35.320529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.320634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.526 [2024-07-25 19:07:35.320659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.526 [2024-07-25 19:07:35.320673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.526 [2024-07-25 19:07:35.320687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.526 [2024-07-25 19:07:35.320714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.526 qpair failed and we were unable to recover it. 
00:34:23.526 [2024-07-25 19:07:35.330551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.526 [2024-07-25 19:07:35.330672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.330697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.330712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.330724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.330752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 00:34:23.527 [2024-07-25 19:07:35.340546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.527 [2024-07-25 19:07:35.340651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.340676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.340690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.340703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.340733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 00:34:23.527 [2024-07-25 19:07:35.350593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.527 [2024-07-25 19:07:35.350710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.350735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.350750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.350763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.350790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 
00:34:23.527 [2024-07-25 19:07:35.360642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.527 [2024-07-25 19:07:35.360770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.360795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.360809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.360822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.360850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 00:34:23.527 [2024-07-25 19:07:35.370593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.527 [2024-07-25 19:07:35.370688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.370713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.370734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.370747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.370775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 00:34:23.527 [2024-07-25 19:07:35.380680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.527 [2024-07-25 19:07:35.380786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.380811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.380826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.380839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.380866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 
00:34:23.527 [2024-07-25 19:07:35.390715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.527 [2024-07-25 19:07:35.390833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.390861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.390875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.390888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.390917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 00:34:23.527 [2024-07-25 19:07:35.400703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.527 [2024-07-25 19:07:35.400814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.527 [2024-07-25 19:07:35.400840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.527 [2024-07-25 19:07:35.400854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.527 [2024-07-25 19:07:35.400867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.527 [2024-07-25 19:07:35.400895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.527 qpair failed and we were unable to recover it. 00:34:23.786 [2024-07-25 19:07:35.410704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.410794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.410820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.410834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.410847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.410875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 
00:34:23.786 [2024-07-25 19:07:35.420808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.420951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.420976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.420990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.421003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.421030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 00:34:23.786 [2024-07-25 19:07:35.430781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.430882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.430907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.430922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.430935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.430962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 00:34:23.786 [2024-07-25 19:07:35.440796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.440906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.440932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.440946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.440958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.440986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 
00:34:23.786 [2024-07-25 19:07:35.450845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.450942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.450967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.450981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.450995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.451022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 00:34:23.786 [2024-07-25 19:07:35.460893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.461004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.461034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.461049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.461070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.461099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 00:34:23.786 [2024-07-25 19:07:35.470884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.470984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.471009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.471022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.471035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.471069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 
00:34:23.786 [2024-07-25 19:07:35.480935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.786 [2024-07-25 19:07:35.481057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.786 [2024-07-25 19:07:35.481088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.786 [2024-07-25 19:07:35.481103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.786 [2024-07-25 19:07:35.481116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.786 [2024-07-25 19:07:35.481143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.786 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.490931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.491030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.491055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.491080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.491094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.491122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.501018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.501158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.501183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.501197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.501210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.501239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 
00:34:23.787 [2024-07-25 19:07:35.511025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.511148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.511173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.511188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.511201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.511229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.521052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.521156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.521181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.521195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.521208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.521236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.531071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.531168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.531194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.531208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.531222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.531249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 
00:34:23.787 [2024-07-25 19:07:35.541144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.541295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.541323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.541338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.541351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.541380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.551112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.551206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.551236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.551250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.551263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.551291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.561185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.561287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.561314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.561328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.561345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.561375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 
00:34:23.787 [2024-07-25 19:07:35.571180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.571273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.571299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.571313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.571326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.571354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.581200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.581300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.581326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.581340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.581353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.581380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.591269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.591378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.591403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.591418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.591431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.591464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 
00:34:23.787 [2024-07-25 19:07:35.601265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.601368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.601393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.601407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.601420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.601448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.611274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.787 [2024-07-25 19:07:35.611375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.787 [2024-07-25 19:07:35.611401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.787 [2024-07-25 19:07:35.611415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.787 [2024-07-25 19:07:35.611428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.787 [2024-07-25 19:07:35.611456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.787 qpair failed and we were unable to recover it. 00:34:23.787 [2024-07-25 19:07:35.621330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.788 [2024-07-25 19:07:35.621432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.788 [2024-07-25 19:07:35.621457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.788 [2024-07-25 19:07:35.621471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.788 [2024-07-25 19:07:35.621484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.788 [2024-07-25 19:07:35.621512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.788 qpair failed and we were unable to recover it. 
00:34:23.788 [2024-07-25 19:07:35.631381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.788 [2024-07-25 19:07:35.631488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.788 [2024-07-25 19:07:35.631513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.788 [2024-07-25 19:07:35.631528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.788 [2024-07-25 19:07:35.631541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.788 [2024-07-25 19:07:35.631568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.788 qpair failed and we were unable to recover it. 00:34:23.788 [2024-07-25 19:07:35.641382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.788 [2024-07-25 19:07:35.641476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.788 [2024-07-25 19:07:35.641506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.788 [2024-07-25 19:07:35.641521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.788 [2024-07-25 19:07:35.641534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.788 [2024-07-25 19:07:35.641562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.788 qpair failed and we were unable to recover it. 00:34:23.788 [2024-07-25 19:07:35.651414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.788 [2024-07-25 19:07:35.651503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.788 [2024-07-25 19:07:35.651527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.788 [2024-07-25 19:07:35.651541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.788 [2024-07-25 19:07:35.651554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.788 [2024-07-25 19:07:35.651581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.788 qpair failed and we were unable to recover it. 
00:34:23.788 [2024-07-25 19:07:35.661433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.788 [2024-07-25 19:07:35.661548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.788 [2024-07-25 19:07:35.661572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.788 [2024-07-25 19:07:35.661586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.788 [2024-07-25 19:07:35.661599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:23.788 [2024-07-25 19:07:35.661627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.788 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.671507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.671609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.671634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.671648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.671661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.671689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.681516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.681606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.681631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.681646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.681659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.681692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 
00:34:24.047 [2024-07-25 19:07:35.691552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.691681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.691706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.691721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.691734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.691761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.701592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.701695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.701720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.701735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.701748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.701775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.711570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.711665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.711689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.711703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.711716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.711743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 
00:34:24.047 [2024-07-25 19:07:35.721581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.721670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.721695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.721709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.721723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.721750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.731643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.731739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.731769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.731784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.731798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.731825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.741645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.741742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.741768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.741782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.741795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.741822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 
00:34:24.047 [2024-07-25 19:07:35.751668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.751768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.751794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.751808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.751821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.751849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.761696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.761790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.761815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.761829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.761842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.761870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 00:34:24.047 [2024-07-25 19:07:35.771769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.771867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.771892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.771907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.771919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.047 [2024-07-25 19:07:35.771953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.047 qpair failed and we were unable to recover it. 
00:34:24.047 [2024-07-25 19:07:35.781765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.047 [2024-07-25 19:07:35.781864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.047 [2024-07-25 19:07:35.781890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.047 [2024-07-25 19:07:35.781904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.047 [2024-07-25 19:07:35.781917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.781945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.791795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.791894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.791920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.791934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.791948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.791975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.801829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.801925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.801951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.801965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.801978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.802005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 
00:34:24.048 [2024-07-25 19:07:35.811849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.811944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.811970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.811984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.811997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.812024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.821893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.821990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.822020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.822035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.822048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.822083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.831902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.832004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.832029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.832043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.832057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.832092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 
00:34:24.048 [2024-07-25 19:07:35.841957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.842057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.842089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.842104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.842117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.842144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.851984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.852092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.852117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.852131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.852145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.852173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.862032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.862151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.862177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.862192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.862214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.862243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 
00:34:24.048 [2024-07-25 19:07:35.872066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.872183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.872208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.872222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.872237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.872266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.882065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.882161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.882187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.882201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.882214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.882243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.892087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.892190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.892215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.892230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.892243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.892270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 
00:34:24.048 [2024-07-25 19:07:35.902142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.902248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.902274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.902288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.902301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.902329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.912150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.048 [2024-07-25 19:07:35.912250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.048 [2024-07-25 19:07:35.912276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.048 [2024-07-25 19:07:35.912290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.048 [2024-07-25 19:07:35.912303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.048 [2024-07-25 19:07:35.912330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.048 qpair failed and we were unable to recover it. 00:34:24.048 [2024-07-25 19:07:35.922208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.049 [2024-07-25 19:07:35.922319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.049 [2024-07-25 19:07:35.922344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.049 [2024-07-25 19:07:35.922359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.049 [2024-07-25 19:07:35.922372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.049 [2024-07-25 19:07:35.922400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.049 qpair failed and we were unable to recover it. 
00:34:24.308 [2024-07-25 19:07:35.932193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:35.932290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:35.932315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:35.932330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:35.932343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:35.932370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:35.942238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:35.942340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:35.942369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:35.942383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:35.942396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:35.942424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:35.952244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:35.952341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:35.952367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:35.952381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:35.952400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:35.952428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 
00:34:24.308 [2024-07-25 19:07:35.962292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:35.962431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:35.962456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:35.962470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:35.962484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:35.962511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:35.972306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:35.972447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:35.972472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:35.972487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:35.972500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:35.972527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:35.982347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:35.982446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:35.982471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:35.982485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:35.982497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:35.982526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 
00:34:24.308 [2024-07-25 19:07:35.992386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:35.992491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:35.992517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:35.992531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:35.992544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:35.992572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:36.002376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:36.002471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:36.002496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:36.002511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:36.002524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:36.002551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:36.012416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:36.012507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:36.012533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:36.012547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:36.012560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:36.012587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 
00:34:24.308 [2024-07-25 19:07:36.022490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:36.022630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:36.022655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:36.022670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:36.022683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:36.022711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:36.032499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:36.032600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:36.032626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:36.032640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:36.032654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.308 [2024-07-25 19:07:36.032682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.308 qpair failed and we were unable to recover it. 00:34:24.308 [2024-07-25 19:07:36.042493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.308 [2024-07-25 19:07:36.042586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.308 [2024-07-25 19:07:36.042611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.308 [2024-07-25 19:07:36.042626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.308 [2024-07-25 19:07:36.042644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.042673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 
00:34:24.309 [2024-07-25 19:07:36.052551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.052652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.052677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.052691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.052704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.052732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.062588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.062687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.062712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.062726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.062739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.062768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.072592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.072685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.072710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.072724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.072738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.072765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 
00:34:24.309 [2024-07-25 19:07:36.082659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.082769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.082794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.082808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.082822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.082849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.092692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.092812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.092837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.092851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.092864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.092891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.102749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.102859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.102884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.102898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.102911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.102942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 
00:34:24.309 [2024-07-25 19:07:36.112697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.112793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.112818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.112832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.112846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.112873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.122757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.122887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.122912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.122927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.122939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.122966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.132781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.132895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.132920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.132940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.132954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.132982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 
00:34:24.309 [2024-07-25 19:07:36.142789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.142888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.142912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.142925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.142937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.142964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.152811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.152905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.152931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.152944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.152958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.152985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.309 [2024-07-25 19:07:36.162844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.162939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.162964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.162978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.162991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.163019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 
00:34:24.309 [2024-07-25 19:07:36.172884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.309 [2024-07-25 19:07:36.172978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.309 [2024-07-25 19:07:36.173003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.309 [2024-07-25 19:07:36.173017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.309 [2024-07-25 19:07:36.173030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.309 [2024-07-25 19:07:36.173064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.309 qpair failed and we were unable to recover it. 00:34:24.310 [2024-07-25 19:07:36.182928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.310 [2024-07-25 19:07:36.183028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.310 [2024-07-25 19:07:36.183054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.310 [2024-07-25 19:07:36.183077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.310 [2024-07-25 19:07:36.183091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.310 [2024-07-25 19:07:36.183119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.310 qpair failed and we were unable to recover it. 00:34:24.571 [2024-07-25 19:07:36.192955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.571 [2024-07-25 19:07:36.193066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.571 [2024-07-25 19:07:36.193092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.571 [2024-07-25 19:07:36.193106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.571 [2024-07-25 19:07:36.193120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.571 [2024-07-25 19:07:36.193148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.571 qpair failed and we were unable to recover it. 
00:34:24.571 [2024-07-25 19:07:36.202953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.571 [2024-07-25 19:07:36.203047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.571 [2024-07-25 19:07:36.203078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.571 [2024-07-25 19:07:36.203094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.571 [2024-07-25 19:07:36.203107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.571 [2024-07-25 19:07:36.203135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.571 qpair failed and we were unable to recover it. 00:34:24.571 [2024-07-25 19:07:36.212973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.571 [2024-07-25 19:07:36.213070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.571 [2024-07-25 19:07:36.213096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.571 [2024-07-25 19:07:36.213111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.571 [2024-07-25 19:07:36.213124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.571 [2024-07-25 19:07:36.213152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.571 qpair failed and we were unable to recover it. 00:34:24.571 [2024-07-25 19:07:36.223032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.571 [2024-07-25 19:07:36.223195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.571 [2024-07-25 19:07:36.223221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.571 [2024-07-25 19:07:36.223244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.571 [2024-07-25 19:07:36.223259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.571 [2024-07-25 19:07:36.223287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.571 qpair failed and we were unable to recover it. 
00:34:24.571 [2024-07-25 19:07:36.233075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.571 [2024-07-25 19:07:36.233220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.571 [2024-07-25 19:07:36.233245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.571 [2024-07-25 19:07:36.233259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.571 [2024-07-25 19:07:36.233273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.571 [2024-07-25 19:07:36.233301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.571 qpair failed and we were unable to recover it. 00:34:24.571 [2024-07-25 19:07:36.243096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.571 [2024-07-25 19:07:36.243196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.571 [2024-07-25 19:07:36.243222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.571 [2024-07-25 19:07:36.243236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.571 [2024-07-25 19:07:36.243249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.571 [2024-07-25 19:07:36.243277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.571 qpair failed and we were unable to recover it. 00:34:24.571 [2024-07-25 19:07:36.253139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.571 [2024-07-25 19:07:36.253234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.571 [2024-07-25 19:07:36.253259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.571 [2024-07-25 19:07:36.253273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.571 [2024-07-25 19:07:36.253286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:24.571 [2024-07-25 19:07:36.253314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.571 qpair failed and we were unable to recover it. 
[The identical seven-line NVMe-oF Fabric CONNECT failure sequence repeats for 66 further I/O qpair connect attempts against tqpair=0x795570, from [2024-07-25 19:07:36.263] through [2024-07-25 19:07:36.915] (console time 00:34:24.571 to 00:34:25.098); only the timestamps differ, and each attempt ends with "qpair failed and we were unable to recover it."]
00:34:25.098 [2024-07-25 19:07:36.925048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.098 [2024-07-25 19:07:36.925176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.098 [2024-07-25 19:07:36.925202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.098 [2024-07-25 19:07:36.925217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.098 [2024-07-25 19:07:36.925231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.098 [2024-07-25 19:07:36.925258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.098 qpair failed and we were unable to recover it. 00:34:25.098 [2024-07-25 19:07:36.935086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.098 [2024-07-25 19:07:36.935231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.098 [2024-07-25 19:07:36.935256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.098 [2024-07-25 19:07:36.935270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.098 [2024-07-25 19:07:36.935283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.098 [2024-07-25 19:07:36.935311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.098 qpair failed and we were unable to recover it. 00:34:25.098 [2024-07-25 19:07:36.945117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.098 [2024-07-25 19:07:36.945274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.098 [2024-07-25 19:07:36.945300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.098 [2024-07-25 19:07:36.945314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.098 [2024-07-25 19:07:36.945327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.098 [2024-07-25 19:07:36.945355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.098 qpair failed and we were unable to recover it. 
00:34:25.098 [2024-07-25 19:07:36.955120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.098 [2024-07-25 19:07:36.955222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.098 [2024-07-25 19:07:36.955247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.098 [2024-07-25 19:07:36.955262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.098 [2024-07-25 19:07:36.955275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.098 [2024-07-25 19:07:36.955302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.098 qpair failed and we were unable to recover it. 00:34:25.098 [2024-07-25 19:07:36.965164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.098 [2024-07-25 19:07:36.965278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.098 [2024-07-25 19:07:36.965303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.098 [2024-07-25 19:07:36.965317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.098 [2024-07-25 19:07:36.965330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.098 [2024-07-25 19:07:36.965358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.098 qpair failed and we were unable to recover it. 00:34:25.357 [2024-07-25 19:07:36.975193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.357 [2024-07-25 19:07:36.975289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.357 [2024-07-25 19:07:36.975314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:36.975329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:36.975342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:36.975370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 
00:34:25.358 [2024-07-25 19:07:36.985214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:36.985314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:36.985343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:36.985357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:36.985376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:36.985404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:36.995246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:36.995340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:36.995366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:36.995380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:36.995393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:36.995420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:37.005335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.005438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.005463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.005478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.005491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.005518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 
00:34:25.358 [2024-07-25 19:07:37.015275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.015403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.015428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.015443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.015456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.015483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:37.025390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.025494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.025521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.025536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.025549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.025577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:37.035317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.035408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.035433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.035448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.035460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.035488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 
00:34:25.358 [2024-07-25 19:07:37.045361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.045455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.045481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.045495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.045508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.045536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:37.055398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.055500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.055525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.055539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.055552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.055581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:37.065424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.065524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.065550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.065564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.065577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.065605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 
00:34:25.358 [2024-07-25 19:07:37.075536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.075653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.075681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.075703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.075717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.075748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:37.085474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.085568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.085594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.085609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.085623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.085651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 00:34:25.358 [2024-07-25 19:07:37.095510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.095605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.095631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.095645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.095657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.358 [2024-07-25 19:07:37.095685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.358 qpair failed and we were unable to recover it. 
00:34:25.358 [2024-07-25 19:07:37.105544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.358 [2024-07-25 19:07:37.105642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.358 [2024-07-25 19:07:37.105668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.358 [2024-07-25 19:07:37.105683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.358 [2024-07-25 19:07:37.105696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.105723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.115544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.115640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.115665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.115680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.115693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.115720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.125592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.125691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.125717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.125731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.125745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.125773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 
00:34:25.359 [2024-07-25 19:07:37.135655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.135752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.135778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.135792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.135805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.135833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.145668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.145769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.145793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.145807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.145819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.145845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.155687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.155791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.155816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.155832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.155846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.155874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 
00:34:25.359 [2024-07-25 19:07:37.165722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.165846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.165870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.165890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.165903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.165931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.175744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.175838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.175863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.175877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.175890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.175918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.185780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.185885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.185911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.185926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.185938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.185966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 
00:34:25.359 [2024-07-25 19:07:37.195796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.195890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.195916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.195930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.195943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.195972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.205841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.205937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.205963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.205977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.205991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.206018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.359 [2024-07-25 19:07:37.215831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.215929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.215955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.215969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.215982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.216010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 
00:34:25.359 [2024-07-25 19:07:37.225875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.359 [2024-07-25 19:07:37.226000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.359 [2024-07-25 19:07:37.226025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.359 [2024-07-25 19:07:37.226039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.359 [2024-07-25 19:07:37.226052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.359 [2024-07-25 19:07:37.226087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.359 qpair failed and we were unable to recover it. 00:34:25.619 [2024-07-25 19:07:37.235923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.236020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.236046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.236067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.236082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.236111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 00:34:25.619 [2024-07-25 19:07:37.245985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.246101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.246126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.246141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.246154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.246183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 
00:34:25.619 [2024-07-25 19:07:37.255962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.256057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.256091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.256112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.256126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.256154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 00:34:25.619 [2024-07-25 19:07:37.266034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.266187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.266213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.266227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.266240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.266268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 00:34:25.619 [2024-07-25 19:07:37.276052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.276164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.276189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.276203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.276216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.276244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 
00:34:25.619 [2024-07-25 19:07:37.286025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.286133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.286159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.286173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.286186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.286214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 00:34:25.619 [2024-07-25 19:07:37.296089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.296196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.296225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.296241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.296254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.296284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 00:34:25.619 [2024-07-25 19:07:37.306105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.306210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.306235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.306249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.306262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.306291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 
00:34:25.619 [2024-07-25 19:07:37.316111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.619 [2024-07-25 19:07:37.316212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.619 [2024-07-25 19:07:37.316237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.619 [2024-07-25 19:07:37.316252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.619 [2024-07-25 19:07:37.316265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.619 [2024-07-25 19:07:37.316292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.619 qpair failed and we were unable to recover it. 00:34:25.619 [2024-07-25 19:07:37.326155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.326252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.326277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.326292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.326305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.326332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.336158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.336255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.336281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.336296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.336309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.336338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 
00:34:25.620 [2024-07-25 19:07:37.346201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.346302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.346332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.346347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.346360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.346388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.356228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.356325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.356351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.356366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.356380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.356407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.366277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.366409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.366435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.366449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.366463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.366490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 
00:34:25.620 [2024-07-25 19:07:37.376323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.376425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.376450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.376465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.376478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.376506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.386359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.386469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.386494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.386508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.386521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.386550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.396401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.396503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.396529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.396543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.396556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.396584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 
00:34:25.620 [2024-07-25 19:07:37.406353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.406452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.406478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.406492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.406505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.406533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.416385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.416487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.416513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.416527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.416539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.416568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.426459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.426559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.426585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.426599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.426613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.426640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 
00:34:25.620 [2024-07-25 19:07:37.436448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.436549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.436579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.436594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.436607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.436636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.446480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.446594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.446620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.620 [2024-07-25 19:07:37.446634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.620 [2024-07-25 19:07:37.446646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.620 [2024-07-25 19:07:37.446674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.620 qpair failed and we were unable to recover it. 00:34:25.620 [2024-07-25 19:07:37.456489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.620 [2024-07-25 19:07:37.456600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.620 [2024-07-25 19:07:37.456625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.621 [2024-07-25 19:07:37.456639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.621 [2024-07-25 19:07:37.456652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.621 [2024-07-25 19:07:37.456679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.621 qpair failed and we were unable to recover it. 
00:34:25.621 [2024-07-25 19:07:37.466556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.621 [2024-07-25 19:07:37.466675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.621 [2024-07-25 19:07:37.466700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.621 [2024-07-25 19:07:37.466714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.621 [2024-07-25 19:07:37.466727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.621 [2024-07-25 19:07:37.466755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.621 qpair failed and we were unable to recover it. 00:34:25.621 [2024-07-25 19:07:37.476583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.621 [2024-07-25 19:07:37.476699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.621 [2024-07-25 19:07:37.476725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.621 [2024-07-25 19:07:37.476739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.621 [2024-07-25 19:07:37.476752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.621 [2024-07-25 19:07:37.476785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.621 qpair failed and we were unable to recover it. 00:34:25.621 [2024-07-25 19:07:37.486637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.621 [2024-07-25 19:07:37.486755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.621 [2024-07-25 19:07:37.486780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.621 [2024-07-25 19:07:37.486794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.621 [2024-07-25 19:07:37.486807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.621 [2024-07-25 19:07:37.486835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.621 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-07-25 19:07:37.496708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.496841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.496867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.496882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.496895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.496922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-07-25 19:07:37.506689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.506794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.506820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.506834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.506847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.506875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-07-25 19:07:37.516653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.516752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.516777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.516791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.516804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.516832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-07-25 19:07:37.526692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.526792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.526822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.526837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.526850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.526878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-07-25 19:07:37.536738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.536852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.536878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.536892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.536905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.536933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-07-25 19:07:37.546805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.546909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.546935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.546949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.546962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.546990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-07-25 19:07:37.556782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.556880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.556905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.556920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.556933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.556960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-07-25 19:07:37.566811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.566906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.566932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.566946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.566959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.566993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-07-25 19:07:37.576823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.576915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.576941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.576957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.576970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.576999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-07-25 19:07:37.586879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.587014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.587039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.587053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.880 [2024-07-25 19:07:37.587073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.880 [2024-07-25 19:07:37.587103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-07-25 19:07:37.596915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.880 [2024-07-25 19:07:37.597016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.880 [2024-07-25 19:07:37.597041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.880 [2024-07-25 19:07:37.597056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.597077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.597107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.606909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.607048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.607080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.607096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.607109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.607137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 
00:34:25.881 [2024-07-25 19:07:37.616976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.617092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.617122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.617137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.617151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.617178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.626988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.627101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.627127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.627141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.627154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.627182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.637074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.637171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.637200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.637215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.637228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.637257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 
00:34:25.881 [2024-07-25 19:07:37.647055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.647194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.647219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.647234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.647247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.647275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.657114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.657213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.657238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.657253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.657265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.657299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.667092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.667191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.667217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.667232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.667244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.667272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 
00:34:25.881 [2024-07-25 19:07:37.677128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.677228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.677253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.677267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.677280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.677308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.687189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.687285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.687310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.687324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.687337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.687364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.697193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.697285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.697310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.697324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.697337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.697365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 
00:34:25.881 [2024-07-25 19:07:37.707233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.707344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.707373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.707388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.707401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.707428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.717240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.717335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.717361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.717375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.717388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.717416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 00:34:25.881 [2024-07-25 19:07:37.727251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.881 [2024-07-25 19:07:37.727344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.881 [2024-07-25 19:07:37.727369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.881 [2024-07-25 19:07:37.727383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.881 [2024-07-25 19:07:37.727396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.881 [2024-07-25 19:07:37.727424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.881 qpair failed and we were unable to recover it. 
00:34:25.882 [2024-07-25 19:07:37.737303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.882 [2024-07-25 19:07:37.737404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.882 [2024-07-25 19:07:37.737429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.882 [2024-07-25 19:07:37.737444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.882 [2024-07-25 19:07:37.737457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.882 [2024-07-25 19:07:37.737484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.882 qpair failed and we were unable to recover it. 00:34:25.882 [2024-07-25 19:07:37.747337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.882 [2024-07-25 19:07:37.747442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.882 [2024-07-25 19:07:37.747467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.882 [2024-07-25 19:07:37.747481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.882 [2024-07-25 19:07:37.747500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:25.882 [2024-07-25 19:07:37.747528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.882 qpair failed and we were unable to recover it. 00:34:26.141 [2024-07-25 19:07:37.757351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.141 [2024-07-25 19:07:37.757452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.141 [2024-07-25 19:07:37.757477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.141 [2024-07-25 19:07:37.757492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.141 [2024-07-25 19:07:37.757505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.141 [2024-07-25 19:07:37.757533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.141 qpair failed and we were unable to recover it. 
00:34:26.141 [2024-07-25 19:07:37.767417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.141 [2024-07-25 19:07:37.767556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.141 [2024-07-25 19:07:37.767582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.141 [2024-07-25 19:07:37.767596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.141 [2024-07-25 19:07:37.767609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.141 [2024-07-25 19:07:37.767636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.141 qpair failed and we were unable to recover it. 00:34:26.141 [2024-07-25 19:07:37.777420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.777552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.777578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.777592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.777606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.777633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.787467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.787579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.787603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.787618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.787631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.787658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 
00:34:26.142 [2024-07-25 19:07:37.797528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.797633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.797658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.797673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.797686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.797713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.807531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.807661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.807687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.807701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.807714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.807742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.817513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.817636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.817661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.817676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.817689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.817716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 
00:34:26.142 [2024-07-25 19:07:37.827569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.827664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.827689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.827703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.827716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.827744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.837624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.837751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.837776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.837790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.837808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.837837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.847607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.847742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.847766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.847780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.847793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.847820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 
00:34:26.142 [2024-07-25 19:07:37.857662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.857780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.857805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.857819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.857833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.857860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.867679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.867833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.867861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.867877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.867890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.867919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.877741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.877883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.877911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.877927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.877940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.877969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 
00:34:26.142 [2024-07-25 19:07:37.887724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.887821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.887847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.887862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.887875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.887903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.142 [2024-07-25 19:07:37.897765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.142 [2024-07-25 19:07:37.897905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.142 [2024-07-25 19:07:37.897931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.142 [2024-07-25 19:07:37.897945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.142 [2024-07-25 19:07:37.897958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.142 [2024-07-25 19:07:37.897987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.142 qpair failed and we were unable to recover it. 00:34:26.143 [2024-07-25 19:07:37.907800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.907909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.907934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.907948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.907961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.907989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 
00:34:26.143 [2024-07-25 19:07:37.917852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.917968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.917993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.918008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.918021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.918048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 00:34:26.143 [2024-07-25 19:07:37.927871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.927984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.928010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.928024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.928042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.928077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 00:34:26.143 [2024-07-25 19:07:37.937863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.937957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.937983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.937998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.938012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.938041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 
00:34:26.143 [2024-07-25 19:07:37.947911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.948028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.948053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.948075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.948090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.948118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 00:34:26.143 [2024-07-25 19:07:37.957936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.958046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.958078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.958093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.958107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.958135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 00:34:26.143 [2024-07-25 19:07:37.967952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.968092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.968118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.968133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.968146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.968174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 
00:34:26.143 [2024-07-25 19:07:37.977997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.978114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.978141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.978155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.978172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.978202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 00:34:26.143 [2024-07-25 19:07:37.988006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.988114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.988141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.988155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.988169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.988197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 00:34:26.143 [2024-07-25 19:07:37.998023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:37.998123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:37.998149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:37.998163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:37.998176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:37.998205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 
00:34:26.143 [2024-07-25 19:07:38.008057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.143 [2024-07-25 19:07:38.008191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.143 [2024-07-25 19:07:38.008217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.143 [2024-07-25 19:07:38.008231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.143 [2024-07-25 19:07:38.008245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.143 [2024-07-25 19:07:38.008273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.143 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.018078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.018176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.018201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.018221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.018235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.018263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.028117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.028218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.028242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.028257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.028270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.028298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 
00:34:26.406 [2024-07-25 19:07:38.038151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.038261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.038286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.038301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.038314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.038341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.048165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.048263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.048288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.048303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.048316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.048343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.058235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.058339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.058365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.058379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.058392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.058419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 
00:34:26.406 [2024-07-25 19:07:38.068269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.068373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.068398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.068413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.068426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.068454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.078248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.078374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.078400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.078415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.078428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.078455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.088334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.088436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.088461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.088476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.088489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.088517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 
00:34:26.406 [2024-07-25 19:07:38.098334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.098435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.098460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.098475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.098487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.098514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.108379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.108496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.108530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.108550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.108565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.108595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.118379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.118493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.118519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.118535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.118549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.118576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 
00:34:26.406 [2024-07-25 19:07:38.128419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.128530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.128556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.128570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.128583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.406 [2024-07-25 19:07:38.128612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.406 qpair failed and we were unable to recover it. 00:34:26.406 [2024-07-25 19:07:38.138446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.406 [2024-07-25 19:07:38.138537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.406 [2024-07-25 19:07:38.138563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.406 [2024-07-25 19:07:38.138578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.406 [2024-07-25 19:07:38.138591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.138619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.148470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.148572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.148596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.148610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.148622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.148649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 
00:34:26.407 [2024-07-25 19:07:38.158485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.158604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.158632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.158647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.158663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.158694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.168534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.168629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.168655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.168669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.168682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.168710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.178566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.178666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.178692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.178706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.178719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.178747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 
00:34:26.407 [2024-07-25 19:07:38.188575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.188673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.188698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.188713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.188726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.188753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.198641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.198742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.198767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.198791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.198805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.198832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.208607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.208732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.208757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.208771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.208784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.208812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 
00:34:26.407 [2024-07-25 19:07:38.218684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.218795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.218820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.218835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.218848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.218876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.228729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.228836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.228861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.228875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.228888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.228916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.238735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.238853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.238878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.238892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.238905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795570 00:34:26.407 [2024-07-25 19:07:38.238932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.407 qpair failed and we were unable to recover it. 
00:34:26.407 [2024-07-25 19:07:38.248776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.248897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.248929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.248945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.248959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50f8000b90 00:34:26.407 [2024-07-25 19:07:38.248990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.258822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.258926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.258955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.258969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.258983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50f8000b90 00:34:26.407 [2024-07-25 19:07:38.259012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.407 qpair failed and we were unable to recover it. 00:34:26.407 [2024-07-25 19:07:38.268855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.407 [2024-07-25 19:07:38.268994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.407 [2024-07-25 19:07:38.269027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.407 [2024-07-25 19:07:38.269043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.407 [2024-07-25 19:07:38.269057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5100000b90 00:34:26.407 [2024-07-25 19:07:38.269097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:26.407 qpair failed and we were unable to recover it. 
00:34:26.407 [2024-07-25 19:07:38.278831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.408 [2024-07-25 19:07:38.278931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.408 [2024-07-25 19:07:38.278958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.408 [2024-07-25 19:07:38.278975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.408 [2024-07-25 19:07:38.278989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5100000b90 00:34:26.408 [2024-07-25 19:07:38.279019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:26.408 qpair failed and we were unable to recover it. 00:34:26.669 [2024-07-25 19:07:38.288851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.669 [2024-07-25 19:07:38.288975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.669 [2024-07-25 19:07:38.289013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.669 [2024-07-25 19:07:38.289031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.669 [2024-07-25 19:07:38.289046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50f0000b90 00:34:26.669 [2024-07-25 19:07:38.289084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.669 qpair failed and we were unable to recover it. 00:34:26.669 [2024-07-25 19:07:38.298883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.669 [2024-07-25 19:07:38.299004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.669 [2024-07-25 19:07:38.299032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.669 [2024-07-25 19:07:38.299047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.669 [2024-07-25 19:07:38.299068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50f0000b90 00:34:26.669 [2024-07-25 19:07:38.299102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.669 qpair failed and we were unable to recover it. 00:34:26.669 Controller properly reset. 
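The block above is the expected failure signature for this disconnect test: while the target's controller state is gone, each attempt to re-attach an I/O queue is rejected on the target side ("Unknown controller ID 0x1"), the host sees the Fabrics CONNECT complete with sct 1, sc 130 (0x82, which appears to correspond to the Fabrics "Connect Invalid Parameters" status), and the qpair is dropped with CQ transport error -6 (ENXIO, "No such device or address"). A minimal sketch of how a host application could react to that condition with the public SPDK host API follows; the helper name and the reset-on-error policy are illustrative assumptions, not code taken from this test.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Hypothetical helper (not part of this test script): poll an I/O qpair and,
 * on a transport-level failure such as the -6 (ENXIO) errors logged above,
 * fall back to a controller reset. The test framework performs an equivalent
 * step, which is why the log continues with "Controller properly reset." */
static int
poll_or_reset(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* max_completions = 0 means "process as many completions as are ready". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc >= 0) {
		return 0;               /* completions processed, qpair healthy */
	}

	fprintf(stderr, "qpair poll failed: %d (%s)\n", (int)rc, strerror((int)-rc));

	/* Reset the controller and let SPDK attempt to re-establish its qpairs;
	 * a return value of 0 indicates the reset succeeded. */
	return spdk_nvme_ctrlr_reset(ctrlr);
}

Resetting straight from the polling path is only one possible policy; the entries above show the test harness instead retrying qpair creation repeatedly until the reset finally lands, at which point the controllers are reinitialized below.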
00:34:26.669 Initializing NVMe Controllers 00:34:26.669 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:26.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:26.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:26.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:26.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:26.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:26.669 Initialization complete. Launching workers. 00:34:26.669 Starting thread on core 1 00:34:26.669 Starting thread on core 2 00:34:26.669 Starting thread on core 3 00:34:26.669 Starting thread on core 0 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:26.669 00:34:26.669 real 0m10.754s 00:34:26.669 user 0m18.704s 00:34:26.669 sys 0m5.355s 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.669 ************************************ 00:34:26.669 END TEST nvmf_target_disconnect_tc2 00:34:26.669 ************************************ 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:26.669 rmmod nvme_tcp 00:34:26.669 rmmod nvme_fabrics 00:34:26.669 rmmod nvme_keyring 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3692583 ']' 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3692583 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3692583 ']' 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3692583 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3692583 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3692583' 00:34:26.669 killing process with pid 3692583 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3692583 00:34:26.669 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3692583 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:26.929 19:07:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.467 19:07:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:29.467 00:34:29.467 real 0m15.433s 00:34:29.467 user 0m44.871s 00:34:29.467 sys 0m7.223s 00:34:29.467 19:07:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:29.467 19:07:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:29.467 ************************************ 00:34:29.467 END TEST nvmf_target_disconnect 00:34:29.467 ************************************ 00:34:29.467 19:07:40 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:29.467 19:07:40 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.467 19:07:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.467 19:07:40 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:29.467 00:34:29.467 real 27m3.049s 00:34:29.467 user 74m27.876s 00:34:29.467 sys 6m21.162s 00:34:29.467 19:07:40 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:29.467 19:07:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.467 ************************************ 00:34:29.467 END TEST nvmf_tcp 00:34:29.467 ************************************ 00:34:29.467 19:07:40 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:29.467 19:07:40 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.467 19:07:40 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:29.467 19:07:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:29.467 19:07:40 -- common/autotest_common.sh@10 -- # set +x 00:34:29.467 ************************************ 00:34:29.467 START TEST spdkcli_nvmf_tcp 00:34:29.467 ************************************ 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:29.467 * Looking for test storage... 
00:34:29.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:29.467 19:07:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3693780 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3693780 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3693780 ']' 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:29.468 19:07:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.468 [2024-07-25 19:07:41.020977] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:29.468 [2024-07-25 19:07:41.021053] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693780 ] 00:34:29.468 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.468 [2024-07-25 19:07:41.077347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:29.468 [2024-07-25 19:07:41.161933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.468 [2024-07-25 19:07:41.161937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.468 19:07:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:29.468 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:29.468 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:29.468 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:29.468 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:29.468 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:29.468 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:29.468 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.468 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:29.468 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.468 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:29.468 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:29.468 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:29.468 ' 00:34:31.998 [2024-07-25 19:07:43.819527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.375 [2024-07-25 19:07:45.035808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:35.943 [2024-07-25 19:07:47.310955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:37.847 [2024-07-25 19:07:49.281370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:39.224 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:39.224 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:39.224 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:39.224 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:39.225 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:39.225 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:39.225 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:39.225 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:39.225 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.225 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.225 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:39.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:39.225 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:39.225 19:07:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:39.483 19:07:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.743 19:07:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:39.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:39.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:39.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:39.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:39.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:39.743 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:39.743 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:39.743 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:39.743 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:39.743 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:39.743 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:39.743 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:39.743 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:39.743 ' 00:34:45.014 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:45.014 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:45.014 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:45.014 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:45.014 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:45.014 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:45.014 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:45.014 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:45.015 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:45.015 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:45.015 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:45.015 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:45.015 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:45.015 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3693780 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3693780 ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3693780 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3693780 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3693780' 00:34:45.015 killing process with pid 3693780 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3693780 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3693780 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3693780 ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3693780 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3693780 ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3693780 00:34:45.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3693780) - No such process 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3693780 is not found' 00:34:45.015 Process with pid 3693780 is not found 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:45.015 00:34:45.015 real 0m15.912s 00:34:45.015 user 0m33.572s 00:34:45.015 sys 0m0.779s 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:45.015 19:07:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.015 ************************************ 00:34:45.015 END TEST spdkcli_nvmf_tcp 00:34:45.015 ************************************ 00:34:45.015 19:07:56 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:45.015 19:07:56 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:45.015 19:07:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:45.015 19:07:56 -- common/autotest_common.sh@10 -- # set +x 00:34:45.015 ************************************ 00:34:45.015 START TEST nvmf_identify_passthru 00:34:45.015 ************************************ 00:34:45.015 19:07:56 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:45.274 * Looking for test storage... 00:34:45.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:45.274 19:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.274 19:07:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.274 19:07:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.274 19:07:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.274 19:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.274 19:07:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.274 19:07:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.274 19:07:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:45.274 19:07:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.274 19:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.274 19:07:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:45.274 19:07:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:45.274 19:07:56 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:45.274 19:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
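Before the NIC discovery that continues below, the nvmftestinit call traced above has already pulled its defaults from nvmf/common.sh. A minimal sketch of those defaults, using only values visible in this trace; the NVME_HOSTID derivation is an assumption, not the helper's exact code:

  # Environment defaults established by nvmf/common.sh, as seen in the trace above.
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumption: strip the prefix to keep the bare UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn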
00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:47.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:47.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:47.177 19:07:58 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.177 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:47.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:47.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:47.178 19:07:58 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:47.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:34:47.178 00:34:47.178 --- 10.0.0.2 ping statistics --- 00:34:47.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.178 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
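The namespace plumbing traced above, condensed into the bare commands; interface and namespace names are the ones this run detected, and the ping output that continues below confirms connectivity in both directions:

  # Condensed version of the nvmf_tcp_init steps traced above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                             # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespaced target -> initiator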
00:34:47.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:34:47.178 00:34:47.178 --- 10.0.0.1 ping statistics --- 00:34:47.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.178 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:47.178 19:07:58 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:47.178 19:07:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.178 19:07:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:47.178 19:07:58 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:47.178 19:07:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:47.178 19:07:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:47.178 19:07:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:47.178 19:07:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:47.178 19:07:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:47.178 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.371 
19:08:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:51.371 19:08:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:51.371 19:08:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:51.371 19:08:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:51.371 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.564 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:55.564 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.564 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.564 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3698269 00:34:55.564 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:55.564 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:55.564 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3698269 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3698269 ']' 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:55.564 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.564 [2024-07-25 19:08:07.414979] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:34:55.564 [2024-07-25 19:08:07.415094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.824 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.824 [2024-07-25 19:08:07.481393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:55.824 [2024-07-25 19:08:07.571456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.824 [2024-07-25 19:08:07.571519] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:55.824 [2024-07-25 19:08:07.571557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.824 [2024-07-25 19:08:07.571568] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.824 [2024-07-25 19:08:07.571578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.824 [2024-07-25 19:08:07.571659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.824 [2024-07-25 19:08:07.571981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:55.824 [2024-07-25 19:08:07.572038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:55.824 [2024-07-25 19:08:07.572041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.824 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:55.824 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:55.824 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:55.824 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.824 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.824 INFO: Log level set to 20 00:34:55.824 INFO: Requests: 00:34:55.824 { 00:34:55.824 "jsonrpc": "2.0", 00:34:55.824 "method": "nvmf_set_config", 00:34:55.824 "id": 1, 00:34:55.824 "params": { 00:34:55.824 "admin_cmd_passthru": { 00:34:55.824 "identify_ctrlr": true 00:34:55.824 } 00:34:55.824 } 00:34:55.824 } 00:34:55.824 00:34:55.824 INFO: response: 00:34:55.824 { 00:34:55.824 "jsonrpc": "2.0", 00:34:55.824 "id": 1, 00:34:55.824 "result": true 00:34:55.824 } 00:34:55.824 00:34:55.824 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.824 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:55.824 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.824 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:55.824 INFO: Setting log level to 20 00:34:55.824 INFO: Setting log level to 20 00:34:55.824 INFO: Log level set to 20 00:34:55.824 INFO: Log level set to 20 00:34:55.824 INFO: Requests: 00:34:55.824 { 00:34:55.824 "jsonrpc": "2.0", 00:34:55.824 "method": "framework_start_init", 00:34:55.824 "id": 1 00:34:55.824 } 00:34:55.824 00:34:55.824 INFO: Requests: 00:34:55.824 { 00:34:55.824 "jsonrpc": "2.0", 00:34:55.824 "method": "framework_start_init", 00:34:55.824 "id": 1 00:34:55.824 } 00:34:55.824 00:34:56.082 [2024-07-25 19:08:07.735455] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:56.082 INFO: response: 00:34:56.082 { 00:34:56.082 "jsonrpc": "2.0", 00:34:56.082 "id": 1, 00:34:56.082 "result": true 00:34:56.082 } 00:34:56.082 00:34:56.082 INFO: response: 00:34:56.082 { 00:34:56.082 "jsonrpc": "2.0", 00:34:56.082 "id": 1, 00:34:56.082 "result": true 00:34:56.082 } 00:34:56.082 00:34:56.082 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.082 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.082 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.082 19:08:07 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:56.082 INFO: Setting log level to 40 00:34:56.082 INFO: Setting log level to 40 00:34:56.082 INFO: Setting log level to 40 00:34:56.082 [2024-07-25 19:08:07.745586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.082 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.082 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:56.082 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.082 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.082 19:08:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:56.082 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.082 19:08:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.361 Nvme0n1 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.361 [2024-07-25 19:08:10.638863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.361 [ 00:34:59.361 { 00:34:59.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:59.361 "subtype": "Discovery", 00:34:59.361 "listen_addresses": [], 00:34:59.361 "allow_any_host": true, 00:34:59.361 "hosts": [] 00:34:59.361 }, 00:34:59.361 { 00:34:59.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.361 "subtype": "NVMe", 00:34:59.361 "listen_addresses": [ 00:34:59.361 { 00:34:59.361 "trtype": "TCP", 00:34:59.361 "adrfam": "IPv4", 00:34:59.361 "traddr": "10.0.0.2", 00:34:59.361 "trsvcid": "4420" 00:34:59.361 } 00:34:59.361 ], 00:34:59.361 "allow_any_host": true, 00:34:59.361 "hosts": [], 00:34:59.361 "serial_number": 
"SPDK00000000000001", 00:34:59.361 "model_number": "SPDK bdev Controller", 00:34:59.361 "max_namespaces": 1, 00:34:59.361 "min_cntlid": 1, 00:34:59.361 "max_cntlid": 65519, 00:34:59.361 "namespaces": [ 00:34:59.361 { 00:34:59.361 "nsid": 1, 00:34:59.361 "bdev_name": "Nvme0n1", 00:34:59.361 "name": "Nvme0n1", 00:34:59.361 "nguid": "0B9B5A5DB3D3439087256A1FAD218A61", 00:34:59.361 "uuid": "0b9b5a5d-b3d3-4390-8725-6a1fad218a61" 00:34:59.361 } 00:34:59.361 ] 00:34:59.361 } 00:34:59.361 ] 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:59.361 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:59.361 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:59.361 19:08:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:59.361 rmmod nvme_tcp 00:34:59.361 rmmod nvme_fabrics 00:34:59.361 rmmod nvme_keyring 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:59.361 19:08:10 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3698269 ']' 00:34:59.361 19:08:10 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3698269 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3698269 ']' 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3698269 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3698269 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3698269' 00:34:59.361 killing process with pid 3698269 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3698269 00:34:59.361 19:08:10 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3698269 00:35:00.737 19:08:12 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:00.737 19:08:12 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:00.737 19:08:12 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:00.737 19:08:12 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:00.737 19:08:12 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:00.737 19:08:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.737 19:08:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:00.737 19:08:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.272 19:08:14 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:03.272 00:35:03.272 real 0m17.700s 00:35:03.272 user 0m26.163s 00:35:03.272 sys 0m2.193s 00:35:03.272 19:08:14 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:03.272 19:08:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.272 ************************************ 00:35:03.272 END TEST nvmf_identify_passthru 00:35:03.272 ************************************ 00:35:03.272 19:08:14 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:03.272 19:08:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:03.272 19:08:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:03.272 19:08:14 -- common/autotest_common.sh@10 -- # set +x 00:35:03.272 ************************************ 00:35:03.272 START TEST nvmf_dif 00:35:03.272 ************************************ 00:35:03.272 19:08:14 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:03.272 * Looking for test storage... 
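The nvmftestfini path traced above unloads the kernel initiator modules, kills the target and flushes the test addresses. A rough manual equivalent, assuming the target pid is still in $nvmfpid; the explicit netns delete is an assumption about what _remove_spdk_ns does internally:

  # Rough manual equivalent of the teardown traced above.
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # stop the nvmf_tgt app
  modprobe -r nvme-tcp                             # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null      # assumption: what _remove_spdk_ns boils down to
  ip -4 addr flush cvl_0_1                         # drop the initiator-side test address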
00:35:03.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.272 19:08:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.272 19:08:14 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.273 19:08:14 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.273 19:08:14 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.273 19:08:14 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.273 19:08:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.273 19:08:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.273 19:08:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.273 19:08:14 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:03.273 19:08:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:03.273 19:08:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:03.273 19:08:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:03.273 19:08:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:03.273 19:08:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:03.273 19:08:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.273 19:08:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:03.273 19:08:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:03.273 19:08:14 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:03.273 19:08:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:05.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:05.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
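The discovery steps traced above, repeated here for the dif test, boil down to matching PCI vendor:device IDs against the Intel E810/X722 and Mellanox lists and then checking sysfs for a bound net device. A small sketch of that idea, assuming lspci is available; the real helper walks a prebuilt pci_bus_cache instead:

  # Classify NICs by the E810 device IDs seen above (0x1592, 0x159b), then
  # confirm a net device is bound under sysfs, as in "Found net devices under ...".
  for pci in $(lspci -Dnm | awk '$3 ~ /8086/ && $4 ~ /1592|159b/ {print $1}'); do
      for netdev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdev" ] && echo "Found net device under $pci: ${netdev##*/}"
      done
  done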
00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:05.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:05.222 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.222 19:08:16 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.223 19:08:16 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:05.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:35:05.223 00:35:05.223 --- 10.0.0.2 ping statistics --- 00:35:05.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.223 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:05.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:35:05.223 00:35:05.223 --- 10.0.0.1 ping statistics --- 00:35:05.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.223 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:05.223 19:08:16 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:06.159 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:06.159 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:06.159 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:06.159 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:06.159 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:06.159 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:06.159 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:06.159 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:06.159 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:06.159 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:06.159 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:06.159 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:06.159 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:06.159 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:06.159 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:06.159 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:06.159 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:06.159 19:08:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:06.159 19:08:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3701404 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:06.159 19:08:17 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3701404 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3701404 ']' 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:06.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:06.159 19:08:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.419 [2024-07-25 19:08:18.041179] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:35:06.419 [2024-07-25 19:08:18.041264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:06.419 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.419 [2024-07-25 19:08:18.110410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.419 [2024-07-25 19:08:18.206098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.419 [2024-07-25 19:08:18.206156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.419 [2024-07-25 19:08:18.206181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.419 [2024-07-25 19:08:18.206193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.419 [2024-07-25 19:08:18.206211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
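At this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace with the full tracepoint mask (-e 0xFFFF), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough standalone equivalent of that start-and-wait step, assuming the paths shown in the trace (waitforlisten also keeps checking that the pid is alive; this sketch only polls the socket, and rpc_get_methods is used purely as a liveness probe):

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# wait until the target accepts RPCs on the default UNIX socket
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done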
00:35:06.419 [2024-07-25 19:08:18.206245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:06.680 19:08:18 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.680 19:08:18 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:06.680 19:08:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:06.680 19:08:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.680 [2024-07-25 19:08:18.353998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.680 19:08:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:06.680 19:08:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.680 ************************************ 00:35:06.680 START TEST fio_dif_1_default 00:35:06.680 ************************************ 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:06.680 bdev_null0 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:06.680 [2024-07-25 19:08:18.414332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:06.680 { 00:35:06.680 "params": { 00:35:06.680 "name": "Nvme$subsystem", 00:35:06.680 "trtype": "$TEST_TRANSPORT", 00:35:06.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.680 "adrfam": "ipv4", 00:35:06.680 "trsvcid": "$NVMF_PORT", 00:35:06.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.680 "hdgst": ${hdgst:-false}, 00:35:06.680 "ddgst": ${ddgst:-false} 00:35:06.680 }, 00:35:06.680 "method": "bdev_nvme_attach_controller" 00:35:06.680 } 00:35:06.680 EOF 00:35:06.680 )") 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:06.680 "params": { 00:35:06.680 "name": "Nvme0", 00:35:06.680 "trtype": "tcp", 00:35:06.680 "traddr": "10.0.0.2", 00:35:06.680 "adrfam": "ipv4", 00:35:06.680 "trsvcid": "4420", 00:35:06.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:06.680 "hdgst": false, 00:35:06.680 "ddgst": false 00:35:06.680 }, 00:35:06.680 "method": "bdev_nvme_attach_controller" 00:35:06.680 }' 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:06.680 19:08:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.939 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:06.939 fio-3.35 00:35:06.939 Starting 1 thread 00:35:06.939 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.185 00:35:19.185 filename0: (groupid=0, jobs=1): err= 0: pid=3701632: Thu Jul 25 19:08:29 2024 00:35:19.185 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:35:19.185 slat (nsec): min=4509, max=74107, avg=9477.83, stdev=3352.06 00:35:19.185 clat (usec): min=40877, max=45031, avg=40991.24, stdev=264.97 00:35:19.185 lat (usec): min=40885, max=45056, avg=41000.72, stdev=265.32 00:35:19.185 clat percentiles (usec): 00:35:19.185 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:19.185 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:19.185 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:19.185 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:35:19.185 | 99.99th=[44827] 00:35:19.185 bw ( KiB/s): min= 384, max= 416, per=99.48%, avg=388.80, stdev=11.72, samples=20 00:35:19.185 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:35:19.185 lat (msec) : 50=100.00% 00:35:19.185 cpu : usr=89.89%, sys=9.84%, ctx=12, majf=0, minf=262 00:35:19.185 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.185 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.185 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:19.185 00:35:19.185 Run status group 0 (all jobs): 00:35:19.185 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10010-10010msec 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 00:35:19.185 real 0m11.096s 00:35:19.185 user 0m10.168s 00:35:19.185 sys 0m1.293s 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 ************************************ 00:35:19.185 END TEST fio_dif_1_default 00:35:19.185 ************************************ 00:35:19.185 19:08:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:19.185 19:08:29 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:19.185 19:08:29 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 ************************************ 00:35:19.185 START TEST fio_dif_1_multi_subsystems 00:35:19.185 ************************************ 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 bdev_null0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 [2024-07-25 19:08:29.561556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 bdev_null1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.185 19:08:29 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.185 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:19.186 { 00:35:19.186 "params": { 00:35:19.186 "name": "Nvme$subsystem", 00:35:19.186 "trtype": "$TEST_TRANSPORT", 00:35:19.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.186 "adrfam": "ipv4", 00:35:19.186 "trsvcid": "$NVMF_PORT", 00:35:19.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.186 "hdgst": ${hdgst:-false}, 00:35:19.186 "ddgst": ${ddgst:-false} 00:35:19.186 }, 00:35:19.186 "method": "bdev_nvme_attach_controller" 00:35:19.186 } 00:35:19.186 EOF 00:35:19.186 )") 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1337 -- # shift 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:19.186 { 00:35:19.186 "params": { 00:35:19.186 "name": "Nvme$subsystem", 00:35:19.186 "trtype": "$TEST_TRANSPORT", 00:35:19.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.186 "adrfam": "ipv4", 00:35:19.186 "trsvcid": "$NVMF_PORT", 00:35:19.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.186 "hdgst": ${hdgst:-false}, 00:35:19.186 "ddgst": ${ddgst:-false} 00:35:19.186 }, 00:35:19.186 "method": "bdev_nvme_attach_controller" 00:35:19.186 } 00:35:19.186 EOF 00:35:19.186 )") 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
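The two heredoc fragments assembled above are merged by jq and handed to fio's SPDK bdev plugin as --spdk_json_conf on /dev/fd/62 (the merged JSON is printed just below). The matching job file goes in on /dev/fd/61 and is never echoed in this log; a hand-written approximation for this two-subsystem case, inferred from the fio banner that follows (randread, 4096B blocks, iodepth 4, roughly 10s runtime) rather than copied from gen_fio_conf, so every option here is an assumption:

cat <<'JOB' > dif_multi.fio
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
JOB

The Nvme0n1/Nvme1n1 names follow SPDK's usual controller/namespace naming for the Nvme0 and Nvme1 controllers attached in the JSON; the real job file is produced by gen_fio_conf and may differ.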
00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:19.186 "params": { 00:35:19.186 "name": "Nvme0", 00:35:19.186 "trtype": "tcp", 00:35:19.186 "traddr": "10.0.0.2", 00:35:19.186 "adrfam": "ipv4", 00:35:19.186 "trsvcid": "4420", 00:35:19.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.186 "hdgst": false, 00:35:19.186 "ddgst": false 00:35:19.186 }, 00:35:19.186 "method": "bdev_nvme_attach_controller" 00:35:19.186 },{ 00:35:19.186 "params": { 00:35:19.186 "name": "Nvme1", 00:35:19.186 "trtype": "tcp", 00:35:19.186 "traddr": "10.0.0.2", 00:35:19.186 "adrfam": "ipv4", 00:35:19.186 "trsvcid": "4420", 00:35:19.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.186 "hdgst": false, 00:35:19.186 "ddgst": false 00:35:19.186 }, 00:35:19.186 "method": "bdev_nvme_attach_controller" 00:35:19.186 }' 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:19.186 19:08:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.186 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:19.186 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:19.186 fio-3.35 00:35:19.186 Starting 2 threads 00:35:19.186 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.150 00:35:29.150 filename0: (groupid=0, jobs=1): err= 0: pid=3703029: Thu Jul 25 19:08:40 2024 00:35:29.150 read: IOPS=96, BW=384KiB/s (394kB/s)(3856KiB/10029msec) 00:35:29.150 slat (nsec): min=7113, max=77578, avg=10592.49, stdev=4623.60 00:35:29.150 clat (usec): min=40734, max=43146, avg=41577.65, stdev=499.19 00:35:29.150 lat (usec): min=40741, max=43166, avg=41588.25, stdev=500.25 00:35:29.150 clat percentiles (usec): 00:35:29.150 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:29.150 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:29.150 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:29.150 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:35:29.150 | 99.99th=[43254] 
00:35:29.150 bw ( KiB/s): min= 352, max= 416, per=33.62%, avg=384.00, stdev=14.68, samples=20 00:35:29.150 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:35:29.150 lat (msec) : 50=100.00% 00:35:29.150 cpu : usr=94.39%, sys=5.32%, ctx=20, majf=0, minf=79 00:35:29.150 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.150 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.150 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:29.150 filename1: (groupid=0, jobs=1): err= 0: pid=3703030: Thu Jul 25 19:08:40 2024 00:35:29.150 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:35:29.151 slat (nsec): min=7132, max=76009, avg=9600.13, stdev=3518.44 00:35:29.151 clat (usec): min=628, max=43119, avg=21026.14, stdev=20146.23 00:35:29.151 lat (usec): min=636, max=43161, avg=21035.74, stdev=20145.69 00:35:29.151 clat percentiles (usec): 00:35:29.151 | 1.00th=[ 644], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 734], 00:35:29.151 | 30.00th=[ 938], 40.00th=[ 988], 50.00th=[40633], 60.00th=[41157], 00:35:29.151 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:29.151 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:29.151 | 99.99th=[43254] 00:35:29.151 bw ( KiB/s): min= 704, max= 768, per=66.62%, avg=761.26, stdev=17.13, samples=19 00:35:29.151 iops : min= 176, max= 192, avg=190.32, stdev= 4.28, samples=19 00:35:29.151 lat (usec) : 750=20.74%, 1000=22.58% 00:35:29.151 lat (msec) : 2=6.58%, 50=50.11% 00:35:29.151 cpu : usr=94.36%, sys=5.34%, ctx=16, majf=0, minf=223 00:35:29.151 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.151 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.151 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:29.151 00:35:29.151 Run status group 0 (all jobs): 00:35:29.151 READ: bw=1142KiB/s (1170kB/s), 384KiB/s-760KiB/s (394kB/s-778kB/s), io=11.2MiB (11.7MB), run=10002-10029msec 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 00:35:29.151 real 0m11.413s 00:35:29.151 user 0m20.437s 00:35:29.151 sys 0m1.366s 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:29.151 19:08:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 ************************************ 00:35:29.151 END TEST fio_dif_1_multi_subsystems 00:35:29.151 ************************************ 00:35:29.151 19:08:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:29.151 19:08:40 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:29.151 19:08:40 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:29.151 19:08:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 ************************************ 00:35:29.151 START TEST fio_dif_rand_params 00:35:29.151 ************************************ 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:29.151 19:08:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 bdev_null0 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.151 19:08:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:29.151 [2024-07-25 19:08:41.016869] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:29.151 { 00:35:29.151 "params": { 00:35:29.151 "name": "Nvme$subsystem", 00:35:29.151 "trtype": "$TEST_TRANSPORT", 00:35:29.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.151 "adrfam": "ipv4", 00:35:29.151 "trsvcid": "$NVMF_PORT", 00:35:29.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.151 "hdgst": ${hdgst:-false}, 00:35:29.151 "ddgst": ${ddgst:-false} 00:35:29.151 }, 00:35:29.151 "method": "bdev_nvme_attach_controller" 00:35:29.151 } 00:35:29.151 EOF 00:35:29.151 )") 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:29.151 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
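For reference, the DIF-type-3 target this case runs against was assembled entirely over the RPC socket by the rpc_cmd calls traced above; rpc_cmd in this harness forwards its arguments to scripts/rpc.py, so the same sequence issued by hand (socket path as established by waitforlisten earlier) would be:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420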
00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:29.410 "params": { 00:35:29.410 "name": "Nvme0", 00:35:29.410 "trtype": "tcp", 00:35:29.410 "traddr": "10.0.0.2", 00:35:29.410 "adrfam": "ipv4", 00:35:29.410 "trsvcid": "4420", 00:35:29.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.410 "hdgst": false, 00:35:29.410 "ddgst": false 00:35:29.410 }, 00:35:29.410 "method": "bdev_nvme_attach_controller" 00:35:29.410 }' 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:29.410 19:08:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.410 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:29.410 ... 
00:35:29.410 fio-3.35 00:35:29.410 Starting 3 threads 00:35:29.668 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.227 00:35:36.227 filename0: (groupid=0, jobs=1): err= 0: pid=3704425: Thu Jul 25 19:08:46 2024 00:35:36.227 read: IOPS=210, BW=26.4MiB/s (27.6MB/s)(132MiB/5007msec) 00:35:36.227 slat (nsec): min=5002, max=55467, avg=15008.69, stdev=4777.15 00:35:36.227 clat (usec): min=5182, max=89320, avg=14203.14, stdev=7891.58 00:35:36.227 lat (usec): min=5194, max=89334, avg=14218.15, stdev=7891.33 00:35:36.227 clat percentiles (usec): 00:35:36.227 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10290], 00:35:36.227 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12911], 60.00th=[13698], 00:35:36.227 | 70.00th=[14484], 80.00th=[15664], 90.00th=[17171], 95.00th=[18744], 00:35:36.227 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54264], 99.95th=[89654], 00:35:36.227 | 99.99th=[89654] 00:35:36.227 bw ( KiB/s): min=15872, max=32768, per=33.67%, avg=26956.80, stdev=6053.32, samples=10 00:35:36.227 iops : min= 124, max= 256, avg=210.60, stdev=47.29, samples=10 00:35:36.227 lat (msec) : 10=17.23%, 20=78.79%, 50=1.80%, 100=2.18% 00:35:36.227 cpu : usr=91.99%, sys=7.57%, ctx=13, majf=0, minf=88 00:35:36.227 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.227 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.227 filename0: (groupid=0, jobs=1): err= 0: pid=3704426: Thu Jul 25 19:08:46 2024 00:35:36.227 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(131MiB/5046msec) 00:35:36.227 slat (nsec): min=4952, max=66489, avg=17362.89, stdev=6548.69 00:35:36.227 clat (usec): min=5012, max=88780, avg=14409.14, stdev=8991.61 00:35:36.227 lat (usec): min=5028, max=88795, avg=14426.50, stdev=8991.25 00:35:36.227 clat percentiles (usec): 00:35:36.227 | 1.00th=[ 6063], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[10552], 00:35:36.227 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:35:36.227 | 70.00th=[13960], 80.00th=[15008], 90.00th=[16450], 95.00th=[19268], 00:35:36.227 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56361], 99.95th=[88605], 00:35:36.227 | 99.99th=[88605] 00:35:36.227 bw ( KiB/s): min=20480, max=33024, per=33.35%, avg=26700.80, stdev=3910.56, samples=10 00:35:36.227 iops : min= 160, max= 258, avg=208.60, stdev=30.55, samples=10 00:35:36.227 lat (msec) : 10=17.40%, 20=77.63%, 50=1.91%, 100=3.06% 00:35:36.227 cpu : usr=91.81%, sys=7.18%, ctx=116, majf=0, minf=130 00:35:36.227 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.227 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.227 filename0: (groupid=0, jobs=1): err= 0: pid=3704427: Thu Jul 25 19:08:46 2024 00:35:36.227 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(132MiB/5046msec) 00:35:36.227 slat (usec): min=4, max=127, avg=16.17, stdev= 6.94 00:35:36.227 clat (usec): min=4896, max=55312, avg=14303.21, stdev=8541.31 00:35:36.227 lat (usec): min=4908, max=55326, avg=14319.39, stdev=8540.97 00:35:36.227 clat percentiles (usec): 00:35:36.227 | 
1.00th=[ 5276], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[10552], 00:35:36.227 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:35:36.227 | 70.00th=[13960], 80.00th=[15008], 90.00th=[16712], 95.00th=[19006], 00:35:36.227 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54789], 99.95th=[55313], 00:35:36.227 | 99.99th=[55313] 00:35:36.227 bw ( KiB/s): min=19200, max=30464, per=33.61%, avg=26911.10, stdev=3348.93, samples=10 00:35:36.227 iops : min= 150, max= 238, avg=210.20, stdev=26.15, samples=10 00:35:36.227 lat (msec) : 10=16.60%, 20=78.65%, 50=1.90%, 100=2.85% 00:35:36.227 cpu : usr=92.17%, sys=7.13%, ctx=50, majf=0, minf=135 00:35:36.227 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.227 issued rwts: total=1054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.227 00:35:36.227 Run status group 0 (all jobs): 00:35:36.227 READ: bw=78.2MiB/s (82.0MB/s), 25.9MiB/s-26.4MiB/s (27.2MB/s-27.6MB/s), io=395MiB (414MB), run=5007-5046msec 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:36.227 19:08:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:36.227 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 bdev_null0 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 [2024-07-25 19:08:47.166951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 bdev_null1 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
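This pass rebuilds the target with --dif-type 2 null bdevs and adds two more subsystems (bdev_null1 and bdev_null2 below) for the 8-job, iodepth-16 random-parameter run. If the metadata/DIF layout of the freshly created bdevs needs to be confirmed outside the trace, it can be read back over the same RPC socket; this is only a convenience check, not something the harness does here:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b bdev_null0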
00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 bdev_null2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.228 { 00:35:36.228 "params": { 00:35:36.228 "name": "Nvme$subsystem", 00:35:36.228 "trtype": "$TEST_TRANSPORT", 00:35:36.228 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:35:36.228 "adrfam": "ipv4", 00:35:36.228 "trsvcid": "$NVMF_PORT", 00:35:36.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.228 "hdgst": ${hdgst:-false}, 00:35:36.228 "ddgst": ${ddgst:-false} 00:35:36.228 }, 00:35:36.228 "method": "bdev_nvme_attach_controller" 00:35:36.228 } 00:35:36.228 EOF 00:35:36.228 )") 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.228 { 00:35:36.228 "params": { 00:35:36.228 "name": "Nvme$subsystem", 00:35:36.228 "trtype": "$TEST_TRANSPORT", 00:35:36.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.228 "adrfam": "ipv4", 00:35:36.228 "trsvcid": "$NVMF_PORT", 00:35:36.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.228 "hdgst": ${hdgst:-false}, 00:35:36.228 "ddgst": ${ddgst:-false} 00:35:36.228 }, 00:35:36.228 "method": "bdev_nvme_attach_controller" 00:35:36.228 } 00:35:36.228 EOF 00:35:36.228 )") 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:36.228 { 00:35:36.228 "params": { 00:35:36.228 "name": "Nvme$subsystem", 00:35:36.228 "trtype": "$TEST_TRANSPORT", 00:35:36.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.228 "adrfam": "ipv4", 00:35:36.228 "trsvcid": "$NVMF_PORT", 00:35:36.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.228 "hdgst": ${hdgst:-false}, 00:35:36.228 "ddgst": ${ddgst:-false} 00:35:36.228 }, 00:35:36.228 "method": "bdev_nvme_attach_controller" 00:35:36.228 } 00:35:36.228 EOF 00:35:36.228 )") 00:35:36.228 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:36.229 "params": { 00:35:36.229 "name": "Nvme0", 00:35:36.229 "trtype": "tcp", 00:35:36.229 "traddr": "10.0.0.2", 00:35:36.229 "adrfam": "ipv4", 00:35:36.229 "trsvcid": "4420", 00:35:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.229 "hdgst": false, 00:35:36.229 "ddgst": false 00:35:36.229 }, 00:35:36.229 "method": "bdev_nvme_attach_controller" 00:35:36.229 },{ 00:35:36.229 "params": { 00:35:36.229 "name": "Nvme1", 00:35:36.229 "trtype": "tcp", 00:35:36.229 "traddr": "10.0.0.2", 00:35:36.229 "adrfam": "ipv4", 00:35:36.229 "trsvcid": "4420", 00:35:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.229 "hdgst": false, 00:35:36.229 "ddgst": false 00:35:36.229 }, 00:35:36.229 "method": "bdev_nvme_attach_controller" 00:35:36.229 },{ 00:35:36.229 "params": { 00:35:36.229 "name": "Nvme2", 00:35:36.229 "trtype": "tcp", 00:35:36.229 "traddr": "10.0.0.2", 00:35:36.229 "adrfam": "ipv4", 00:35:36.229 "trsvcid": "4420", 00:35:36.229 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:36.229 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:36.229 "hdgst": false, 00:35:36.229 "ddgst": false 00:35:36.229 }, 00:35:36.229 "method": "bdev_nvme_attach_controller" 00:35:36.229 }' 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:36.229 19:08:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.229 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.229 ... 00:35:36.229 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.229 ... 00:35:36.229 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:36.229 ... 00:35:36.229 fio-3.35 00:35:36.229 Starting 24 threads 00:35:36.229 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.454 00:35:48.454 filename0: (groupid=0, jobs=1): err= 0: pid=3705285: Thu Jul 25 19:08:58 2024 00:35:48.454 read: IOPS=424, BW=1699KiB/s (1740kB/s)(16.8MiB/10093msec) 00:35:48.454 slat (usec): min=7, max=123, avg=35.35, stdev=19.53 00:35:48.454 clat (msec): min=15, max=385, avg=37.23, stdev=31.78 00:35:48.454 lat (msec): min=15, max=385, avg=37.26, stdev=31.77 00:35:48.454 clat percentiles (msec): 00:35:48.454 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.454 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.454 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.454 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.454 | 99.99th=[ 388] 00:35:48.454 bw ( KiB/s): min= 144, max= 2052, per=4.21%, avg=1709.00, stdev=557.55, samples=20 00:35:48.454 iops : min= 36, max= 513, avg=427.25, stdev=139.39, samples=20 00:35:48.454 lat (msec) : 20=0.75%, 50=97.39%, 250=0.42%, 500=1.45% 00:35:48.454 cpu : usr=96.08%, sys=2.54%, ctx=146, majf=0, minf=31 00:35:48.454 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.454 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename0: (groupid=0, jobs=1): err= 0: pid=3705286: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10012msec) 00:35:48.455 slat (usec): min=7, max=130, avg=43.91, stdev=23.23 00:35:48.455 clat (msec): min=15, max=388, avg=37.56, stdev=36.74 00:35:48.455 lat (msec): min=15, max=388, avg=37.60, stdev=36.74 00:35:48.455 clat percentiles (msec): 00:35:48.455 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.455 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.455 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 36], 00:35:48.455 | 99.00th=[ 288], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:35:48.455 | 99.99th=[ 388] 00:35:48.455 bw ( KiB/s): min= 128, max= 1936, per=4.15%, avg=1683.20, stdev=571.60, samples=20 00:35:48.455 iops : min= 32, max= 484, avg=420.80, stdev=142.90, samples=20 00:35:48.455 lat (msec) : 20=2.46%, 50=94.96%, 100=1.02%, 250=0.05%, 500=1.52% 00:35:48.455 cpu : usr=97.40%, sys=1.65%, ctx=107, majf=0, minf=34 
00:35:48.455 IO depths : 1=4.2%, 2=10.4%, 4=24.7%, 8=52.4%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:48.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename0: (groupid=0, jobs=1): err= 0: pid=3705287: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=421, BW=1686KiB/s (1726kB/s)(16.6MiB/10076msec) 00:35:48.455 slat (usec): min=8, max=111, avg=42.68, stdev=15.24 00:35:48.455 clat (msec): min=27, max=395, avg=37.55, stdev=33.56 00:35:48.455 lat (msec): min=27, max=395, avg=37.59, stdev=33.56 00:35:48.455 clat percentiles (msec): 00:35:48.455 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.455 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:35:48.455 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.455 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 397], 99.95th=[ 397], 00:35:48.455 | 99.99th=[ 397] 00:35:48.455 bw ( KiB/s): min= 192, max= 2048, per=4.17%, avg=1692.00, stdev=557.62, samples=20 00:35:48.455 iops : min= 48, max= 512, avg=423.00, stdev=139.41, samples=20 00:35:48.455 lat (msec) : 50=97.60%, 100=0.61%, 250=0.47%, 500=1.32% 00:35:48.455 cpu : usr=97.20%, sys=1.72%, ctx=108, majf=0, minf=27 00:35:48.455 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 issued rwts: total=4246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename0: (groupid=0, jobs=1): err= 0: pid=3705288: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=424, BW=1698KiB/s (1739kB/s)(16.8MiB/10102msec) 00:35:48.455 slat (usec): min=8, max=104, avg=39.07, stdev=12.18 00:35:48.455 clat (msec): min=15, max=396, avg=37.17, stdev=31.85 00:35:48.455 lat (msec): min=15, max=396, avg=37.21, stdev=31.85 00:35:48.455 clat percentiles (msec): 00:35:48.455 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.455 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.455 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.455 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.455 | 99.99th=[ 397] 00:35:48.455 bw ( KiB/s): min= 144, max= 2052, per=4.21%, avg=1709.00, stdev=557.55, samples=20 00:35:48.455 iops : min= 36, max= 513, avg=427.25, stdev=139.39, samples=20 00:35:48.455 lat (msec) : 20=0.75%, 50=97.39%, 250=0.42%, 500=1.45% 00:35:48.455 cpu : usr=97.91%, sys=1.69%, ctx=30, majf=0, minf=22 00:35:48.455 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename0: (groupid=0, jobs=1): err= 0: pid=3705289: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=420, BW=1684KiB/s (1724kB/s)(16.6MiB/10112msec) 00:35:48.455 slat (nsec): min=8341, max=94755, avg=30608.15, stdev=13790.26 00:35:48.455 clat (msec): 
min=31, max=343, avg=37.60, stdev=31.03 00:35:48.455 lat (msec): min=31, max=344, avg=37.63, stdev=31.02 00:35:48.455 clat percentiles (msec): 00:35:48.455 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.455 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.455 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.455 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 288], 00:35:48.455 | 99.99th=[ 347] 00:35:48.455 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1696.00, stdev=542.86, samples=20 00:35:48.455 iops : min= 64, max= 480, avg=424.00, stdev=135.71, samples=20 00:35:48.455 lat (msec) : 50=97.37%, 100=0.38%, 250=0.75%, 500=1.50% 00:35:48.455 cpu : usr=95.15%, sys=2.89%, ctx=211, majf=0, minf=32 00:35:48.455 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename0: (groupid=0, jobs=1): err= 0: pid=3705290: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.6MiB/10079msec) 00:35:48.455 slat (usec): min=8, max=121, avg=40.49, stdev=25.05 00:35:48.455 clat (msec): min=23, max=338, avg=37.51, stdev=30.99 00:35:48.455 lat (msec): min=23, max=338, avg=37.56, stdev=30.98 00:35:48.455 clat percentiles (msec): 00:35:48.455 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.455 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.455 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.455 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 288], 00:35:48.455 | 99.99th=[ 338] 00:35:48.455 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1696.00, stdev=542.86, samples=20 00:35:48.455 iops : min= 64, max= 480, avg=424.00, stdev=135.71, samples=20 00:35:48.455 lat (msec) : 50=97.37%, 100=0.38%, 250=0.75%, 500=1.50% 00:35:48.455 cpu : usr=98.04%, sys=1.55%, ctx=19, majf=0, minf=39 00:35:48.455 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename0: (groupid=0, jobs=1): err= 0: pid=3705291: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.6MiB/10079msec) 00:35:48.455 slat (nsec): min=8082, max=60093, avg=27409.63, stdev=9801.05 00:35:48.455 clat (msec): min=31, max=357, avg=37.59, stdev=31.02 00:35:48.455 lat (msec): min=31, max=357, avg=37.62, stdev=31.01 00:35:48.455 clat percentiles (msec): 00:35:48.455 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.455 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.455 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.455 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 288], 00:35:48.455 | 99.99th=[ 359] 00:35:48.455 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1696.00, stdev=542.86, samples=20 00:35:48.455 iops : min= 64, max= 480, avg=424.00, stdev=135.71, samples=20 00:35:48.455 lat 
(msec) : 50=97.37%, 100=0.42%, 250=0.70%, 500=1.50% 00:35:48.455 cpu : usr=96.79%, sys=2.15%, ctx=178, majf=0, minf=28 00:35:48.455 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename0: (groupid=0, jobs=1): err= 0: pid=3705292: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=421, BW=1686KiB/s (1727kB/s)(16.6MiB/10057msec) 00:35:48.455 slat (nsec): min=8215, max=92974, avg=37525.15, stdev=12513.56 00:35:48.455 clat (msec): min=26, max=344, avg=37.45, stdev=31.88 00:35:48.455 lat (msec): min=26, max=344, avg=37.48, stdev=31.87 00:35:48.455 clat percentiles (msec): 00:35:48.455 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.455 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:35:48.455 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.455 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.455 | 99.99th=[ 347] 00:35:48.455 bw ( KiB/s): min= 144, max= 1920, per=4.16%, avg=1689.60, stdev=561.01, samples=20 00:35:48.455 iops : min= 36, max= 480, avg=422.40, stdev=140.25, samples=20 00:35:48.455 lat (msec) : 50=97.74%, 100=0.38%, 250=0.42%, 500=1.46% 00:35:48.455 cpu : usr=98.17%, sys=1.42%, ctx=16, majf=0, minf=28 00:35:48.455 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.455 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.455 filename1: (groupid=0, jobs=1): err= 0: pid=3705293: Thu Jul 25 19:08:58 2024 00:35:48.455 read: IOPS=421, BW=1686KiB/s (1726kB/s)(16.6MiB/10062msec) 00:35:48.455 slat (nsec): min=8309, max=67469, avg=32196.55, stdev=9652.34 00:35:48.456 clat (msec): min=31, max=494, avg=37.65, stdev=34.42 00:35:48.456 lat (msec): min=31, max=494, avg=37.68, stdev=34.42 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.456 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 376], 99.95th=[ 376], 00:35:48.456 | 99.99th=[ 493] 00:35:48.456 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1689.75, stdev=553.47, samples=20 00:35:48.456 iops : min= 64, max= 480, avg=422.40, stdev=138.36, samples=20 00:35:48.456 lat (msec) : 50=97.74%, 100=0.42%, 250=0.33%, 500=1.51% 00:35:48.456 cpu : usr=95.84%, sys=2.42%, ctx=215, majf=0, minf=32 00:35:48.456 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.456 filename1: (groupid=0, jobs=1): err= 0: pid=3705294: Thu Jul 25 19:08:58 2024 00:35:48.456 read: IOPS=439, 
BW=1760KiB/s (1802kB/s)(17.3MiB/10063msec) 00:35:48.456 slat (usec): min=7, max=149, avg=27.38, stdev=24.29 00:35:48.456 clat (msec): min=13, max=459, avg=36.19, stdev=37.22 00:35:48.456 lat (msec): min=13, max=459, avg=36.21, stdev=37.22 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 20], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 30], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 38], 00:35:48.456 | 99.00th=[ 275], 99.50th=[ 388], 99.90th=[ 439], 99.95th=[ 460], 00:35:48.456 | 99.99th=[ 460] 00:35:48.456 bw ( KiB/s): min= 128, max= 2192, per=4.35%, avg=1764.40, stdev=610.04, samples=20 00:35:48.456 iops : min= 32, max= 548, avg=441.10, stdev=152.51, samples=20 00:35:48.456 lat (msec) : 20=1.81%, 50=96.16%, 100=0.36%, 250=0.54%, 500=1.13% 00:35:48.456 cpu : usr=98.19%, sys=1.36%, ctx=17, majf=0, minf=32 00:35:48.456 IO depths : 1=0.2%, 2=0.7%, 4=3.3%, 8=79.2%, 16=16.6%, 32=0.0%, >=64=0.0% 00:35:48.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 complete : 0=0.0%, 4=89.6%, 8=8.8%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 issued rwts: total=4427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.456 filename1: (groupid=0, jobs=1): err= 0: pid=3705295: Thu Jul 25 19:08:58 2024 00:35:48.456 read: IOPS=421, BW=1686KiB/s (1726kB/s)(16.6MiB/10075msec) 00:35:48.456 slat (usec): min=6, max=114, avg=43.28, stdev=18.08 00:35:48.456 clat (msec): min=31, max=419, avg=37.52, stdev=33.14 00:35:48.456 lat (msec): min=31, max=419, avg=37.57, stdev=33.13 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.456 | 99.00th=[ 259], 99.50th=[ 275], 99.90th=[ 422], 99.95th=[ 422], 00:35:48.456 | 99.99th=[ 422] 00:35:48.456 bw ( KiB/s): min= 192, max= 2048, per=4.17%, avg=1692.00, stdev=555.66, samples=20 00:35:48.456 iops : min= 48, max= 512, avg=423.00, stdev=138.92, samples=20 00:35:48.456 lat (msec) : 50=97.60%, 100=0.52%, 250=0.75%, 500=1.13% 00:35:48.456 cpu : usr=96.50%, sys=2.16%, ctx=141, majf=0, minf=30 00:35:48.456 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 issued rwts: total=4246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.456 filename1: (groupid=0, jobs=1): err= 0: pid=3705296: Thu Jul 25 19:08:58 2024 00:35:48.456 read: IOPS=430, BW=1722KiB/s (1763kB/s)(16.9MiB/10055msec) 00:35:48.456 slat (usec): min=7, max=132, avg=36.65, stdev=28.05 00:35:48.456 clat (msec): min=13, max=436, avg=36.95, stdev=38.81 00:35:48.456 lat (msec): min=13, max=436, avg=36.99, stdev=38.81 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 20], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 33], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 42], 00:35:48.456 | 99.00th=[ 313], 99.50th=[ 388], 99.90th=[ 439], 99.95th=[ 439], 00:35:48.456 | 99.99th=[ 439] 00:35:48.456 bw ( KiB/s): min= 128, max= 2064, 
per=4.25%, avg=1724.95, stdev=600.73, samples=20 00:35:48.456 iops : min= 32, max= 516, avg=431.20, stdev=150.19, samples=20 00:35:48.456 lat (msec) : 20=1.41%, 50=96.74%, 100=0.37%, 250=0.32%, 500=1.16% 00:35:48.456 cpu : usr=98.17%, sys=1.41%, ctx=15, majf=0, minf=36 00:35:48.456 IO depths : 1=1.6%, 2=3.7%, 4=9.5%, 8=71.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:35:48.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 complete : 0=0.0%, 4=90.8%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 issued rwts: total=4328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.456 filename1: (groupid=0, jobs=1): err= 0: pid=3705297: Thu Jul 25 19:08:58 2024 00:35:48.456 read: IOPS=424, BW=1699KiB/s (1740kB/s)(16.8MiB/10094msec) 00:35:48.456 slat (usec): min=8, max=107, avg=34.06, stdev=15.14 00:35:48.456 clat (msec): min=15, max=363, avg=37.24, stdev=31.74 00:35:48.456 lat (msec): min=15, max=363, avg=37.27, stdev=31.73 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.456 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.456 | 99.99th=[ 363] 00:35:48.456 bw ( KiB/s): min= 144, max= 2048, per=4.21%, avg=1708.80, stdev=557.42, samples=20 00:35:48.456 iops : min= 36, max= 512, avg=427.20, stdev=139.36, samples=20 00:35:48.456 lat (msec) : 20=0.75%, 50=97.39%, 250=0.42%, 500=1.45% 00:35:48.456 cpu : usr=96.78%, sys=1.97%, ctx=155, majf=0, minf=42 00:35:48.456 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.456 filename1: (groupid=0, jobs=1): err= 0: pid=3705298: Thu Jul 25 19:08:58 2024 00:35:48.456 read: IOPS=438, BW=1752KiB/s (1794kB/s)(17.2MiB/10062msec) 00:35:48.456 slat (usec): min=8, max=102, avg=21.75, stdev=17.01 00:35:48.456 clat (msec): min=15, max=465, avg=36.29, stdev=33.44 00:35:48.456 lat (msec): min=15, max=465, avg=36.31, stdev=33.44 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 21], 5.00th=[ 23], 10.00th=[ 26], 20.00th=[ 29], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 42], 00:35:48.456 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 347], 99.95th=[ 468], 00:35:48.456 | 99.99th=[ 468] 00:35:48.456 bw ( KiB/s): min= 176, max= 2208, per=4.33%, avg=1756.95, stdev=602.73, samples=20 00:35:48.456 iops : min= 44, max= 552, avg=439.20, stdev=150.69, samples=20 00:35:48.456 lat (msec) : 20=1.00%, 50=96.69%, 100=0.59%, 250=0.27%, 500=1.45% 00:35:48.456 cpu : usr=98.31%, sys=1.26%, ctx=27, majf=0, minf=36 00:35:48.456 IO depths : 1=0.1%, 2=1.0%, 4=5.9%, 8=77.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:35:48.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 complete : 0=0.0%, 4=89.8%, 8=7.7%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 issued rwts: total=4408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.456 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:35:48.456 filename1: (groupid=0, jobs=1): err= 0: pid=3705299: Thu Jul 25 19:08:58 2024 00:35:48.456 read: IOPS=422, BW=1690KiB/s (1730kB/s)(16.6MiB/10071msec) 00:35:48.456 slat (usec): min=4, max=124, avg=34.55, stdev=15.55 00:35:48.456 clat (msec): min=18, max=374, avg=37.55, stdev=34.07 00:35:48.456 lat (msec): min=18, max=374, avg=37.58, stdev=34.07 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.456 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 376], 99.95th=[ 376], 00:35:48.456 | 99.99th=[ 376] 00:35:48.456 bw ( KiB/s): min= 256, max= 2048, per=4.18%, avg=1696.00, stdev=546.03, samples=20 00:35:48.456 iops : min= 64, max= 512, avg=424.00, stdev=136.51, samples=20 00:35:48.456 lat (msec) : 20=0.05%, 50=98.07%, 100=0.05%, 250=0.33%, 500=1.50% 00:35:48.456 cpu : usr=97.00%, sys=2.01%, ctx=91, majf=0, minf=35 00:35:48.456 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.456 issued rwts: total=4254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.456 filename1: (groupid=0, jobs=1): err= 0: pid=3705300: Thu Jul 25 19:08:58 2024 00:35:48.456 read: IOPS=424, BW=1699KiB/s (1740kB/s)(16.8MiB/10094msec) 00:35:48.456 slat (usec): min=4, max=111, avg=37.06, stdev=14.92 00:35:48.456 clat (msec): min=14, max=327, avg=37.20, stdev=31.70 00:35:48.456 lat (msec): min=15, max=327, avg=37.24, stdev=31.70 00:35:48.456 clat percentiles (msec): 00:35:48.456 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.456 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.456 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.456 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.456 | 99.99th=[ 330] 00:35:48.457 bw ( KiB/s): min= 144, max= 2048, per=4.21%, avg=1708.80, stdev=556.26, samples=20 00:35:48.457 iops : min= 36, max= 512, avg=427.20, stdev=139.07, samples=20 00:35:48.457 lat (msec) : 20=0.75%, 50=97.39%, 250=0.37%, 500=1.49% 00:35:48.457 cpu : usr=97.82%, sys=1.47%, ctx=136, majf=0, minf=34 00:35:48.457 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.457 filename2: (groupid=0, jobs=1): err= 0: pid=3705301: Thu Jul 25 19:08:58 2024 00:35:48.457 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.8MiB/10048msec) 00:35:48.457 slat (usec): min=5, max=145, avg=49.77, stdev=22.13 00:35:48.457 clat (msec): min=14, max=292, avg=37.05, stdev=31.55 00:35:48.457 lat (msec): min=14, max=292, avg=37.10, stdev=31.55 00:35:48.457 clat percentiles (msec): 00:35:48.457 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.457 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:35:48.457 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.457 | 99.00th=[ 268], 
99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.457 | 99.99th=[ 292] 00:35:48.457 bw ( KiB/s): min= 128, max= 2048, per=4.21%, avg=1708.80, stdev=557.62, samples=20 00:35:48.457 iops : min= 32, max= 512, avg=427.20, stdev=139.40, samples=20 00:35:48.457 lat (msec) : 20=0.70%, 50=97.43%, 250=0.37%, 500=1.49% 00:35:48.457 cpu : usr=98.25%, sys=1.33%, ctx=15, majf=0, minf=24 00:35:48.457 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.457 filename2: (groupid=0, jobs=1): err= 0: pid=3705302: Thu Jul 25 19:08:58 2024 00:35:48.457 read: IOPS=421, BW=1684KiB/s (1724kB/s)(16.6MiB/10071msec) 00:35:48.457 slat (usec): min=8, max=107, avg=33.23, stdev=19.10 00:35:48.457 clat (msec): min=31, max=547, avg=37.56, stdev=34.70 00:35:48.457 lat (msec): min=31, max=548, avg=37.59, stdev=34.70 00:35:48.457 clat percentiles (msec): 00:35:48.457 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.457 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.457 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.457 | 99.00th=[ 268], 99.50th=[ 292], 99.90th=[ 397], 99.95th=[ 397], 00:35:48.457 | 99.99th=[ 550] 00:35:48.457 bw ( KiB/s): min= 144, max= 1920, per=4.16%, avg=1689.60, stdev=561.01, samples=20 00:35:48.457 iops : min= 36, max= 480, avg=422.40, stdev=140.25, samples=20 00:35:48.457 lat (msec) : 50=97.74%, 100=0.38%, 250=0.42%, 500=1.42%, 750=0.05% 00:35:48.457 cpu : usr=98.05%, sys=1.45%, ctx=42, majf=0, minf=33 00:35:48.457 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.457 filename2: (groupid=0, jobs=1): err= 0: pid=3705303: Thu Jul 25 19:08:58 2024 00:35:48.457 read: IOPS=419, BW=1679KiB/s (1719kB/s)(16.5MiB/10064msec) 00:35:48.457 slat (usec): min=8, max=122, avg=40.83, stdev=18.78 00:35:48.457 clat (msec): min=30, max=554, avg=37.75, stdev=40.11 00:35:48.457 lat (msec): min=30, max=554, avg=37.79, stdev=40.11 00:35:48.457 clat percentiles (msec): 00:35:48.457 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.457 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.457 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.457 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 439], 99.95th=[ 439], 00:35:48.457 | 99.99th=[ 558] 00:35:48.457 bw ( KiB/s): min= 144, max= 1920, per=4.15%, avg=1683.20, stdev=571.18, samples=20 00:35:48.457 iops : min= 36, max= 480, avg=420.80, stdev=142.79, samples=20 00:35:48.457 lat (msec) : 50=98.11%, 100=0.38%, 250=0.38%, 500=1.09%, 750=0.05% 00:35:48.457 cpu : usr=98.36%, sys=1.23%, ctx=17, majf=0, minf=37 00:35:48.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:48.457 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.457 filename2: (groupid=0, jobs=1): err= 0: pid=3705304: Thu Jul 25 19:08:58 2024 00:35:48.457 read: IOPS=421, BW=1685KiB/s (1725kB/s)(16.6MiB/10065msec) 00:35:48.457 slat (nsec): min=8140, max=97593, avg=35618.75, stdev=16003.49 00:35:48.457 clat (msec): min=30, max=374, avg=37.65, stdev=34.22 00:35:48.457 lat (msec): min=30, max=374, avg=37.69, stdev=34.21 00:35:48.457 clat percentiles (msec): 00:35:48.457 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.457 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.457 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.457 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 376], 99.95th=[ 376], 00:35:48.457 | 99.99th=[ 376] 00:35:48.457 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1689.60, stdev=553.44, samples=20 00:35:48.457 iops : min= 64, max= 480, avg=422.40, stdev=138.36, samples=20 00:35:48.457 lat (msec) : 50=97.74%, 100=0.42%, 250=0.33%, 500=1.51% 00:35:48.457 cpu : usr=98.37%, sys=1.23%, ctx=11, majf=0, minf=27 00:35:48.457 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.457 filename2: (groupid=0, jobs=1): err= 0: pid=3705305: Thu Jul 25 19:08:58 2024 00:35:48.457 read: IOPS=423, BW=1692KiB/s (1733kB/s)(16.6MiB/10021msec) 00:35:48.457 slat (nsec): min=8361, max=77209, avg=36755.47, stdev=10323.38 00:35:48.457 clat (msec): min=24, max=292, avg=37.48, stdev=31.71 00:35:48.457 lat (msec): min=24, max=292, avg=37.51, stdev=31.71 00:35:48.457 clat percentiles (msec): 00:35:48.457 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.457 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.457 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.457 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.457 | 99.99th=[ 292] 00:35:48.457 bw ( KiB/s): min= 128, max= 2048, per=4.16%, avg=1689.60, stdev=562.71, samples=20 00:35:48.457 iops : min= 32, max= 512, avg=422.40, stdev=140.68, samples=20 00:35:48.457 lat (msec) : 50=97.74%, 100=0.38%, 250=0.38%, 500=1.51% 00:35:48.457 cpu : usr=96.14%, sys=2.44%, ctx=161, majf=0, minf=28 00:35:48.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.457 filename2: (groupid=0, jobs=1): err= 0: pid=3705306: Thu Jul 25 19:08:58 2024 00:35:48.457 read: IOPS=424, BW=1699KiB/s (1740kB/s)(16.8MiB/10095msec) 00:35:48.457 slat (usec): min=6, max=129, avg=37.38, stdev=15.68 00:35:48.457 clat (msec): min=15, max=396, avg=37.18, stdev=31.83 00:35:48.457 lat (msec): min=15, max=396, avg=37.22, stdev=31.83 00:35:48.457 clat percentiles (msec): 00:35:48.457 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.457 | 30.00th=[ 33], 
40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:35:48.457 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.457 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.457 | 99.99th=[ 397] 00:35:48.457 bw ( KiB/s): min= 128, max= 2048, per=4.21%, avg=1708.80, stdev=557.59, samples=20 00:35:48.457 iops : min= 32, max= 512, avg=427.20, stdev=139.40, samples=20 00:35:48.457 lat (msec) : 20=0.75%, 50=97.39%, 250=0.42%, 500=1.45% 00:35:48.457 cpu : usr=96.79%, sys=2.16%, ctx=141, majf=0, minf=32 00:35:48.457 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.457 filename2: (groupid=0, jobs=1): err= 0: pid=3705307: Thu Jul 25 19:08:58 2024 00:35:48.457 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.6MiB/10079msec) 00:35:48.457 slat (nsec): min=8147, max=98890, avg=25928.80, stdev=16453.15 00:35:48.457 clat (msec): min=18, max=476, avg=37.66, stdev=34.23 00:35:48.457 lat (msec): min=18, max=476, avg=37.68, stdev=34.22 00:35:48.457 clat percentiles (msec): 00:35:48.457 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.457 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:48.457 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.457 | 99.00th=[ 268], 99.50th=[ 288], 99.90th=[ 376], 99.95th=[ 376], 00:35:48.457 | 99.99th=[ 477] 00:35:48.457 bw ( KiB/s): min= 256, max= 2048, per=4.18%, avg=1696.00, stdev=546.03, samples=20 00:35:48.457 iops : min= 64, max= 512, avg=424.00, stdev=136.51, samples=20 00:35:48.457 lat (msec) : 20=0.21%, 50=97.70%, 100=0.26%, 250=0.33%, 500=1.50% 00:35:48.457 cpu : usr=98.51%, sys=1.09%, ctx=12, majf=0, minf=33 00:35:48.457 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.457 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.457 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.458 filename2: (groupid=0, jobs=1): err= 0: pid=3705308: Thu Jul 25 19:08:58 2024 00:35:48.458 read: IOPS=426, BW=1705KiB/s (1746kB/s)(16.8MiB/10098msec) 00:35:48.458 slat (usec): min=4, max=121, avg=34.99, stdev=16.59 00:35:48.458 clat (msec): min=6, max=368, avg=37.07, stdev=31.74 00:35:48.458 lat (msec): min=6, max=368, avg=37.11, stdev=31.74 00:35:48.458 clat percentiles (msec): 00:35:48.458 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:48.458 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:35:48.458 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:48.458 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:35:48.458 | 99.99th=[ 368] 00:35:48.458 bw ( KiB/s): min= 144, max= 2176, per=4.23%, avg=1715.20, stdev=562.24, samples=20 00:35:48.458 iops : min= 36, max= 544, avg=428.80, stdev=140.56, samples=20 00:35:48.458 lat (msec) : 10=0.37%, 20=0.74%, 50=97.03%, 250=0.42%, 500=1.44% 00:35:48.458 cpu : usr=92.92%, sys=3.92%, ctx=398, majf=0, minf=54 00:35:48.458 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:48.458 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.458 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.458 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:48.458 00:35:48.458 Run status group 0 (all jobs): 00:35:48.458 READ: bw=39.6MiB/s (41.6MB/s), 1679KiB/s-1760KiB/s (1719kB/s-1802kB/s), io=401MiB (420MB), run=10012-10112msec 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 bdev_null0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 [2024-07-25 19:08:58.834395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 bdev_null1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local 
sanitizers 00:35:48.458 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.458 { 00:35:48.458 "params": { 00:35:48.458 "name": "Nvme$subsystem", 00:35:48.458 "trtype": "$TEST_TRANSPORT", 00:35:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.458 "adrfam": "ipv4", 00:35:48.459 "trsvcid": "$NVMF_PORT", 00:35:48.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.459 "hdgst": ${hdgst:-false}, 00:35:48.459 "ddgst": ${ddgst:-false} 00:35:48.459 }, 00:35:48.459 "method": "bdev_nvme_attach_controller" 00:35:48.459 } 00:35:48.459 EOF 00:35:48.459 )") 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.459 { 00:35:48.459 "params": { 00:35:48.459 "name": "Nvme$subsystem", 00:35:48.459 "trtype": "$TEST_TRANSPORT", 00:35:48.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.459 "adrfam": "ipv4", 00:35:48.459 "trsvcid": "$NVMF_PORT", 00:35:48.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.459 "hdgst": ${hdgst:-false}, 00:35:48.459 "ddgst": ${ddgst:-false} 00:35:48.459 }, 00:35:48.459 "method": "bdev_nvme_attach_controller" 00:35:48.459 } 00:35:48.459 EOF 00:35:48.459 )") 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:48.459 "params": { 00:35:48.459 "name": "Nvme0", 00:35:48.459 "trtype": "tcp", 00:35:48.459 "traddr": "10.0.0.2", 00:35:48.459 "adrfam": "ipv4", 00:35:48.459 "trsvcid": "4420", 00:35:48.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.459 "hdgst": false, 00:35:48.459 "ddgst": false 00:35:48.459 }, 00:35:48.459 "method": "bdev_nvme_attach_controller" 00:35:48.459 },{ 00:35:48.459 "params": { 00:35:48.459 "name": "Nvme1", 00:35:48.459 "trtype": "tcp", 00:35:48.459 "traddr": "10.0.0.2", 00:35:48.459 "adrfam": "ipv4", 00:35:48.459 "trsvcid": "4420", 00:35:48.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:48.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:48.459 "hdgst": false, 00:35:48.459 "ddgst": false 00:35:48.459 }, 00:35:48.459 "method": "bdev_nvme_attach_controller" 00:35:48.459 }' 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.459 19:08:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.459 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:48.459 ... 00:35:48.459 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:48.459 ... 
00:35:48.459 fio-3.35 00:35:48.459 Starting 4 threads 00:35:48.459 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.722 00:35:53.722 filename0: (groupid=0, jobs=1): err= 0: pid=3706685: Thu Jul 25 19:09:04 2024 00:35:53.722 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5004msec) 00:35:53.722 slat (nsec): min=4941, max=76261, avg=13451.37, stdev=6952.60 00:35:53.722 clat (usec): min=897, max=8172, avg=4177.30, stdev=661.32 00:35:53.722 lat (usec): min=910, max=8184, avg=4190.75, stdev=661.61 00:35:53.722 clat percentiles (usec): 00:35:53.722 | 1.00th=[ 2704], 5.00th=[ 3195], 10.00th=[ 3458], 20.00th=[ 3752], 00:35:53.722 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:35:53.722 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5407], 00:35:53.722 | 99.00th=[ 6652], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[ 8029], 00:35:53.722 | 99.99th=[ 8160] 00:35:53.723 bw ( KiB/s): min=14672, max=16176, per=25.62%, avg=15166.10, stdev=522.26, samples=10 00:35:53.723 iops : min= 1834, max= 2022, avg=1895.70, stdev=65.32, samples=10 00:35:53.723 lat (usec) : 1000=0.01% 00:35:53.723 lat (msec) : 2=0.11%, 4=32.02%, 10=67.87% 00:35:53.723 cpu : usr=91.78%, sys=7.38%, ctx=44, majf=0, minf=10 00:35:53.723 IO depths : 1=0.1%, 2=6.7%, 4=65.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 issued rwts: total=9485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.723 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.723 filename0: (groupid=0, jobs=1): err= 0: pid=3706686: Thu Jul 25 19:09:04 2024 00:35:53.723 read: IOPS=1819, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5001msec) 00:35:53.723 slat (nsec): min=5069, max=64533, avg=13443.85, stdev=6599.15 00:35:53.723 clat (usec): min=798, max=8633, avg=4350.59, stdev=700.33 00:35:53.723 lat (usec): min=812, max=8663, avg=4364.03, stdev=699.86 00:35:53.723 clat percentiles (usec): 00:35:53.723 | 1.00th=[ 2835], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 4015], 00:35:53.723 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:53.723 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5866], 00:35:53.723 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 7832], 99.95th=[ 8094], 00:35:53.723 | 99.99th=[ 8586] 00:35:53.723 bw ( KiB/s): min=13424, max=15168, per=24.57%, avg=14547.22, stdev=526.20, samples=9 00:35:53.723 iops : min= 1678, max= 1896, avg=1818.33, stdev=65.77, samples=9 00:35:53.723 lat (usec) : 1000=0.09% 00:35:53.723 lat (msec) : 2=0.34%, 4=19.30%, 10=80.27% 00:35:53.723 cpu : usr=93.52%, sys=5.90%, ctx=10, majf=0, minf=9 00:35:53.723 IO depths : 1=0.1%, 2=9.7%, 4=63.1%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 issued rwts: total=9098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.723 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.723 filename1: (groupid=0, jobs=1): err= 0: pid=3706687: Thu Jul 25 19:09:04 2024 00:35:53.723 read: IOPS=1820, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5001msec) 00:35:53.723 slat (nsec): min=7256, max=63083, avg=13191.77, stdev=6050.18 00:35:53.723 clat (usec): min=918, max=7924, avg=4351.74, stdev=705.24 00:35:53.723 lat (usec): min=931, max=7932, avg=4364.93, stdev=704.82 00:35:53.723 
clat percentiles (usec): 00:35:53.723 | 1.00th=[ 2933], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 3982], 00:35:53.723 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:53.723 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5080], 95.00th=[ 5997], 00:35:53.723 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[ 7701], 00:35:53.723 | 99.99th=[ 7898] 00:35:53.723 bw ( KiB/s): min=13680, max=15184, per=24.63%, avg=14584.89, stdev=537.14, samples=9 00:35:53.723 iops : min= 1710, max= 1898, avg=1823.11, stdev=67.14, samples=9 00:35:53.723 lat (usec) : 1000=0.01% 00:35:53.723 lat (msec) : 2=0.25%, 4=20.74%, 10=78.99% 00:35:53.723 cpu : usr=93.40%, sys=6.00%, ctx=11, majf=0, minf=0 00:35:53.723 IO depths : 1=0.1%, 2=8.7%, 4=63.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 issued rwts: total=9102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.723 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.723 filename1: (groupid=0, jobs=1): err= 0: pid=3706688: Thu Jul 25 19:09:04 2024 00:35:53.723 read: IOPS=1868, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5002msec) 00:35:53.723 slat (nsec): min=5066, max=64515, avg=13516.09, stdev=6537.11 00:35:53.723 clat (usec): min=800, max=7785, avg=4234.71, stdev=693.62 00:35:53.723 lat (usec): min=812, max=7797, avg=4248.23, stdev=693.41 00:35:53.723 clat percentiles (usec): 00:35:53.723 | 1.00th=[ 2704], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3818], 00:35:53.723 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:35:53.723 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5604], 00:35:53.723 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[ 7570], 00:35:53.723 | 99.99th=[ 7767] 00:35:53.723 bw ( KiB/s): min=14208, max=15664, per=25.28%, avg=14970.67, stdev=436.79, samples=9 00:35:53.723 iops : min= 1776, max= 1958, avg=1871.33, stdev=54.60, samples=9 00:35:53.723 lat (usec) : 1000=0.04% 00:35:53.723 lat (msec) : 2=0.33%, 4=28.72%, 10=70.90% 00:35:53.723 cpu : usr=92.88%, sys=6.52%, ctx=10, majf=0, minf=9 00:35:53.723 IO depths : 1=0.1%, 2=10.3%, 4=61.7%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:53.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.723 issued rwts: total=9348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.723 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:53.723 00:35:53.723 Run status group 0 (all jobs): 00:35:53.723 READ: bw=57.8MiB/s (60.6MB/s), 14.2MiB/s-14.8MiB/s (14.9MB/s-15.5MB/s), io=289MiB (303MB), run=5001-5004msec 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.723 19:09:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.723 00:35:53.723 real 0m24.215s 00:35:53.723 user 4m31.860s 00:35:53.723 sys 0m7.724s 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:53.723 19:09:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 ************************************ 00:35:53.723 END TEST fio_dif_rand_params 00:35:53.723 ************************************ 00:35:53.723 19:09:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:53.723 19:09:05 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:53.723 19:09:05 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:53.723 19:09:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 ************************************ 00:35:53.723 START TEST fio_dif_digest 00:35:53.723 ************************************ 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:53.723 19:09:05 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 bdev_null0 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.723 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:53.724 [2024-07-25 19:09:05.280174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:53.724 { 00:35:53.724 "params": { 
00:35:53.724 "name": "Nvme$subsystem", 00:35:53.724 "trtype": "$TEST_TRANSPORT", 00:35:53.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:53.724 "adrfam": "ipv4", 00:35:53.724 "trsvcid": "$NVMF_PORT", 00:35:53.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:53.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:53.724 "hdgst": ${hdgst:-false}, 00:35:53.724 "ddgst": ${ddgst:-false} 00:35:53.724 }, 00:35:53.724 "method": "bdev_nvme_attach_controller" 00:35:53.724 } 00:35:53.724 EOF 00:35:53.724 )") 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:53.724 "params": { 00:35:53.724 "name": "Nvme0", 00:35:53.724 "trtype": "tcp", 00:35:53.724 "traddr": "10.0.0.2", 00:35:53.724 "adrfam": "ipv4", 00:35:53.724 "trsvcid": "4420", 00:35:53.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.724 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.724 "hdgst": true, 00:35:53.724 "ddgst": true 00:35:53.724 }, 00:35:53.724 "method": "bdev_nvme_attach_controller" 00:35:53.724 }' 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:53.724 19:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:53.724 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:53.724 ... 
00:35:53.724 fio-3.35 00:35:53.724 Starting 3 threads 00:35:53.724 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.915 00:36:05.915 filename0: (groupid=0, jobs=1): err= 0: pid=3707558: Thu Jul 25 19:09:16 2024 00:36:05.915 read: IOPS=198, BW=24.8MiB/s (26.1MB/s)(250MiB/10046msec) 00:36:05.915 slat (nsec): min=4766, max=43931, avg=14759.68, stdev=3271.81 00:36:05.915 clat (usec): min=10244, max=49340, avg=15052.10, stdev=1416.42 00:36:05.915 lat (usec): min=10258, max=49354, avg=15066.86, stdev=1416.50 00:36:05.915 clat percentiles (usec): 00:36:05.915 | 1.00th=[12780], 5.00th=[13566], 10.00th=[13829], 20.00th=[14353], 00:36:05.915 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:36:05.915 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:36:05.916 | 99.00th=[17433], 99.50th=[17957], 99.90th=[48497], 99.95th=[49546], 00:36:05.916 | 99.99th=[49546] 00:36:05.916 bw ( KiB/s): min=24576, max=26112, per=33.42%, avg=25523.20, stdev=390.46, samples=20 00:36:05.916 iops : min= 192, max= 204, avg=199.40, stdev= 3.05, samples=20 00:36:05.916 lat (msec) : 20=99.85%, 50=0.15% 00:36:05.916 cpu : usr=89.29%, sys=9.07%, ctx=252, majf=0, minf=184 00:36:05.916 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.916 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:05.916 filename0: (groupid=0, jobs=1): err= 0: pid=3707559: Thu Jul 25 19:09:16 2024 00:36:05.916 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(250MiB/10047msec) 00:36:05.916 slat (usec): min=4, max=332, avg=18.00, stdev= 9.54 00:36:05.916 clat (usec): min=11424, max=55967, avg=15017.94, stdev=1551.42 00:36:05.916 lat (usec): min=11438, max=55982, avg=15035.93, stdev=1551.40 00:36:05.916 clat percentiles (usec): 00:36:05.916 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13698], 20.00th=[14222], 00:36:05.916 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15139], 00:36:05.916 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:36:05.916 | 99.00th=[17433], 99.50th=[17695], 99.90th=[20055], 99.95th=[49546], 00:36:05.916 | 99.99th=[55837] 00:36:05.916 bw ( KiB/s): min=25037, max=26368, per=33.50%, avg=25584.65, stdev=320.42, samples=20 00:36:05.916 iops : min= 195, max= 206, avg=199.85, stdev= 2.56, samples=20 00:36:05.916 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:36:05.916 cpu : usr=88.17%, sys=9.31%, ctx=577, majf=0, minf=135 00:36:05.916 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.916 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:05.916 filename0: (groupid=0, jobs=1): err= 0: pid=3707560: Thu Jul 25 19:09:16 2024 00:36:05.916 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(250MiB/10045msec) 00:36:05.916 slat (nsec): min=4810, max=41407, avg=16571.13, stdev=2732.47 00:36:05.916 clat (usec): min=12062, max=55109, avg=15056.43, stdev=1473.41 00:36:05.916 lat (usec): min=12082, max=55126, avg=15073.00, stdev=1473.36 00:36:05.916 clat percentiles (usec): 00:36:05.916 | 1.00th=[13042], 
5.00th=[13566], 10.00th=[13829], 20.00th=[14353], 00:36:05.916 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:36:05.916 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:36:05.916 | 99.00th=[17433], 99.50th=[17695], 99.90th=[47973], 99.95th=[55313], 00:36:05.916 | 99.99th=[55313] 00:36:05.916 bw ( KiB/s): min=25088, max=26112, per=33.42%, avg=25523.20, stdev=288.92, samples=20 00:36:05.916 iops : min= 196, max= 204, avg=199.40, stdev= 2.26, samples=20 00:36:05.916 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:36:05.916 cpu : usr=92.05%, sys=7.38%, ctx=24, majf=0, minf=109 00:36:05.916 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.916 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:05.916 00:36:05.916 Run status group 0 (all jobs): 00:36:05.916 READ: bw=74.6MiB/s (78.2MB/s), 24.8MiB/s-24.9MiB/s (26.0MB/s-26.1MB/s), io=749MiB (786MB), run=10045-10047msec 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.916 00:36:05.916 real 0m11.281s 00:36:05.916 user 0m28.358s 00:36:05.916 sys 0m2.896s 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:05.916 19:09:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:05.916 ************************************ 00:36:05.916 END TEST fio_dif_digest 00:36:05.916 ************************************ 00:36:05.916 19:09:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:05.916 19:09:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:05.916 rmmod nvme_tcp 00:36:05.916 rmmod nvme_fabrics 00:36:05.916 rmmod 
nvme_keyring 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3701404 ']' 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3701404 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3701404 ']' 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3701404 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3701404 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3701404' 00:36:05.916 killing process with pid 3701404 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3701404 00:36:05.916 19:09:16 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3701404 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:05.916 19:09:16 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:06.174 Waiting for block devices as requested 00:36:06.174 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:06.432 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:06.432 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:06.432 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:06.690 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:06.690 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:06.690 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:06.690 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:06.948 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:06.948 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:06.948 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:06.948 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.206 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.206 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.206 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.206 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:07.465 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:07.465 19:09:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:07.465 19:09:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:07.465 19:09:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:07.465 19:09:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:07.465 19:09:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.465 19:09:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:07.465 19:09:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.000 19:09:21 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:10.000 00:36:10.000 real 1m6.671s 00:36:10.000 user 6m27.653s 00:36:10.000 sys 0m20.041s 00:36:10.000 19:09:21 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:10.000 19:09:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:36:10.000 ************************************ 00:36:10.000 END TEST nvmf_dif 00:36:10.000 ************************************ 00:36:10.000 19:09:21 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:10.000 19:09:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:10.000 19:09:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:10.000 19:09:21 -- common/autotest_common.sh@10 -- # set +x 00:36:10.000 ************************************ 00:36:10.000 START TEST nvmf_abort_qd_sizes 00:36:10.000 ************************************ 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:10.000 * Looking for test storage... 00:36:10.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:10.000 19:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.000 19:09:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:10.001 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:10.001 19:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:10.001 19:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:11.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:11.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.902 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:11.903 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:11.903 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:11.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:11.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:36:11.903 00:36:11.903 --- 10.0.0.2 ping statistics --- 00:36:11.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.903 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:11.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:11.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:36:11.903 00:36:11.903 --- 10.0.0.1 ping statistics --- 00:36:11.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.903 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:11.903 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:12.834 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.834 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.834 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.834 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.834 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.834 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:13.092 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:13.092 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:13.092 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:14.025 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3712959 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3712959 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3712959 ']' 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:14.025 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.025 [2024-07-25 19:09:25.900916] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:36:14.025 [2024-07-25 19:09:25.901010] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.284 EAL: No free 2048 kB hugepages reported on node 1 00:36:14.284 [2024-07-25 19:09:25.971366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:14.284 [2024-07-25 19:09:26.068572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.284 [2024-07-25 19:09:26.068634] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.284 [2024-07-25 19:09:26.068659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.284 [2024-07-25 19:09:26.068675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.284 [2024-07-25 19:09:26.068687] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.284 [2024-07-25 19:09:26.068777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.284 [2024-07-25 19:09:26.068839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:14.284 [2024-07-25 19:09:26.068935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:14.284 [2024-07-25 19:09:26.068937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:14.542 19:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.542 ************************************ 00:36:14.542 START TEST spdk_target_abort 00:36:14.542 ************************************ 00:36:14.542 19:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:14.542 19:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:14.542 19:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:14.542 19:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.542 19:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.849 spdk_targetn1 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.849 [2024-07-25 19:09:29.106829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.849 [2024-07-25 19:09:29.139107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:17.849 19:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.849 EAL: No free 2048 kB hugepages reported on node 1 
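For reference, a minimal sketch of the same target setup and abort sweep driven by hand with scripts/rpc.py, assuming an spdk_tgt is already running and the NVMe device at 0000:88:00.0 is bound to a userspace driver; the RPC names, flags and addresses mirror the trace above and this is a sketch, not the test script itself.

RPC="./scripts/rpc.py"
# Attach the local PCIe NVMe device; its namespace shows up as bdev spdk_targetn1.
$RPC bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
# Create the TCP transport with the same options the test passes.
$RPC nvmf_create_transport -t tcp -o -u 8192
# Create the subsystem, add the namespace, and listen on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# Sweep the abort queue depths exercised below (4, 24, 64).
for qd in 4 24 64; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done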
00:36:21.125 Initializing NVMe Controllers 00:36:21.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:21.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:21.125 Initialization complete. Launching workers. 00:36:21.125 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12328, failed: 0 00:36:21.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1300, failed to submit 11028 00:36:21.126 success 774, unsuccess 526, failed 0 00:36:21.126 19:09:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.126 19:09:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.126 EAL: No free 2048 kB hugepages reported on node 1 00:36:24.401 Initializing NVMe Controllers 00:36:24.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.401 Initialization complete. Launching workers. 00:36:24.401 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8721, failed: 0 00:36:24.401 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7503 00:36:24.401 success 335, unsuccess 883, failed 0 00:36:24.401 19:09:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.401 19:09:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.401 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.933 Initializing NVMe Controllers 00:36:26.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:26.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:26.933 Initialization complete. Launching workers. 
00:36:26.933 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31816, failed: 0 00:36:26.933 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2648, failed to submit 29168 00:36:26.933 success 530, unsuccess 2118, failed 0 00:36:26.933 19:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:26.933 19:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.933 19:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:26.933 19:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.933 19:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:26.933 19:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.933 19:09:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3712959 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3712959 ']' 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3712959 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3712959 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3712959' 00:36:28.310 killing process with pid 3712959 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3712959 00:36:28.310 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3712959 00:36:28.568 00:36:28.568 real 0m14.010s 00:36:28.568 user 0m53.162s 00:36:28.568 sys 0m2.605s 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:28.568 ************************************ 00:36:28.568 END TEST spdk_target_abort 00:36:28.568 ************************************ 00:36:28.568 19:09:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:28.568 19:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:28.568 19:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:28.568 19:09:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:28.568 ************************************ 00:36:28.568 START TEST kernel_target_abort 00:36:28.568 
************************************ 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:28.568 19:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.506 Waiting for block devices as requested 00:36:29.764 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:29.764 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:29.764 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.022 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.022 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.022 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.022 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.281 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.281 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.281 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:30.281 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.538 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.538 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.538 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.539 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.798 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.798 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:30.798 No valid GPT data, bailing 00:36:30.798 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:31.058 19:09:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:31.058 00:36:31.058 Discovery Log Number of Records 2, Generation counter 2 00:36:31.058 =====Discovery Log Entry 0====== 00:36:31.058 trtype: tcp 00:36:31.058 adrfam: ipv4 00:36:31.058 subtype: current discovery subsystem 00:36:31.058 treq: not specified, sq flow control disable supported 00:36:31.058 portid: 1 00:36:31.058 trsvcid: 4420 00:36:31.058 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:31.058 traddr: 10.0.0.1 00:36:31.058 eflags: none 00:36:31.058 sectype: none 00:36:31.058 =====Discovery Log Entry 1====== 00:36:31.058 trtype: tcp 00:36:31.058 adrfam: ipv4 00:36:31.058 subtype: nvme subsystem 00:36:31.058 treq: not specified, sq flow control disable supported 00:36:31.058 portid: 1 00:36:31.058 trsvcid: 4420 00:36:31.058 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:31.058 traddr: 10.0.0.1 00:36:31.058 eflags: none 00:36:31.058 sectype: none 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:31.058 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.059 19:09:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:31.059 19:09:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.059 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.346 Initializing NVMe Controllers 00:36:34.346 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:34.346 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:34.346 Initialization complete. Launching workers. 00:36:34.346 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40796, failed: 0 00:36:34.346 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40796, failed to submit 0 00:36:34.346 success 0, unsuccess 40796, failed 0 00:36:34.346 19:09:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.346 19:09:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.346 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.632 Initializing NVMe Controllers 00:36:37.632 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.632 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.632 Initialization complete. Launching workers. 
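For reference, a minimal sketch of the kernel nvmet target configuration performed a few lines above; the xtrace does not show redirect targets, so the configfs attribute names here are the standard nvmet ones and should be read as assumptions rather than a transcript.

# Build the subsystem, namespace and port under configfs and back the
# namespace with the raw NVMe block device found earlier (/dev/nvme0n1).
modprobe nvmet
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"        # the test address used above
echo tcp  > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"           # expose the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420   # should report the discovery and testnqn entries
# Teardown later mirrors this in reverse: remove the port->subsystem symlink,
# rmdir the namespace/port/subsystem directories, then modprobe -r nvmet_tcp nvmet.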
00:36:37.632 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76464, failed: 0 00:36:37.632 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19270, failed to submit 57194 00:36:37.632 success 0, unsuccess 19270, failed 0 00:36:37.632 19:09:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.632 19:09:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.632 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.923 Initializing NVMe Controllers 00:36:40.923 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.923 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.923 Initialization complete. Launching workers. 00:36:40.923 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76822, failed: 0 00:36:40.923 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19178, failed to submit 57644 00:36:40.923 success 0, unsuccess 19178, failed 0 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:40.923 19:09:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:41.489 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:41.489 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:41.489 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:41.489 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:41.489 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:41.489 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:41.489 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:41.489 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:41.489 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:41.489 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:41.489 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:41.754 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:41.754 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:41.754 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:41.754 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:41.754 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:42.725 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:42.725 00:36:42.725 real 0m14.151s 00:36:42.725 user 0m6.071s 00:36:42.725 sys 0m3.178s 00:36:42.725 19:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:42.725 19:09:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:42.725 ************************************ 00:36:42.725 END TEST kernel_target_abort 00:36:42.725 ************************************ 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:42.725 rmmod nvme_tcp 00:36:42.725 rmmod nvme_fabrics 00:36:42.725 rmmod nvme_keyring 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3712959 ']' 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3712959 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3712959 ']' 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3712959 00:36:42.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3712959) - No such process 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3712959 is not found' 00:36:42.725 Process with pid 3712959 is not found 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:42.725 19:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:44.099 Waiting for block devices as requested 00:36:44.099 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:44.099 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:44.099 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:44.359 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:44.359 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:44.359 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:44.359 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:44.359 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:44.618 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:44.618 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:44.618 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:44.876 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:44.876 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:44.876 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:44.876 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:45.134 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:45.134 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:45.134 19:09:56 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:45.134 19:09:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:45.134 19:09:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:45.134 19:09:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:45.134 19:09:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.134 19:09:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:45.134 19:09:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.668 19:09:59 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:47.668 00:36:47.668 real 0m37.707s 00:36:47.668 user 1m1.381s 00:36:47.668 sys 0m9.200s 00:36:47.668 19:09:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:47.668 19:09:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.668 ************************************ 00:36:47.668 END TEST nvmf_abort_qd_sizes 00:36:47.668 ************************************ 00:36:47.668 19:09:59 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:47.668 19:09:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:47.668 19:09:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:47.668 19:09:59 -- common/autotest_common.sh@10 -- # set +x 00:36:47.668 ************************************ 00:36:47.668 START TEST keyring_file 00:36:47.668 ************************************ 00:36:47.668 19:09:59 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:47.668 * Looking for test storage... 
00:36:47.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:47.668 19:09:59 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:47.668 19:09:59 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.668 19:09:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.669 19:09:59 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.669 19:09:59 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.669 19:09:59 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.669 19:09:59 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.669 19:09:59 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.669 19:09:59 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.669 19:09:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:47.669 19:09:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ubiAtDpPgV 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:47.669 19:09:59 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ubiAtDpPgV 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ubiAtDpPgV 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ubiAtDpPgV 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dvWqOVa7lO 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:47.669 19:09:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dvWqOVa7lO 00:36:47.669 19:09:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dvWqOVa7lO 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dvWqOVa7lO 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=3718718 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:47.669 19:09:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3718718 00:36:47.669 19:09:59 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3718718 ']' 00:36:47.669 19:09:59 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.669 19:09:59 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:47.669 19:09:59 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.669 19:09:59 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:47.669 19:09:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:47.669 [2024-07-25 19:09:59.280761] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
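For reference, a minimal sketch of the keyring flow exercised from here on, assuming a bdevperf instance is listening on /var/tmp/bperf.sock and that the mktemp file already holds the NVMeTLSkey-1 interchange string generated above; the command names and flags mirror the trace below.

keyfile=/tmp/tmp.ubiAtDpPgV                  # mktemp path from above, already containing the PSK
chmod 0600 "$keyfile"                        # looser modes (e.g. 0660) are rejected later in this test
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keyfile"
# Connect bdevperf's initiator to the test target's 127.0.0.1:4420 listener using the named key.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
  -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0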
00:36:47.669 [2024-07-25 19:09:59.280854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718718 ] 00:36:47.669 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.669 [2024-07-25 19:09:59.339007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.669 [2024-07-25 19:09:59.423560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:47.927 19:09:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:47.927 [2024-07-25 19:09:59.662981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.927 null0 00:36:47.927 [2024-07-25 19:09:59.695033] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:47.927 [2024-07-25 19:09:59.695544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:47.927 [2024-07-25 19:09:59.703052] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.927 19:09:59 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.927 19:09:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:47.927 [2024-07-25 19:09:59.715082] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:47.927 request: 00:36:47.927 { 00:36:47.928 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.928 "secure_channel": false, 00:36:47.928 "listen_address": { 00:36:47.928 "trtype": "tcp", 00:36:47.928 "traddr": "127.0.0.1", 00:36:47.928 "trsvcid": "4420" 00:36:47.928 }, 00:36:47.928 "method": "nvmf_subsystem_add_listener", 00:36:47.928 "req_id": 1 00:36:47.928 } 00:36:47.928 Got JSON-RPC error response 00:36:47.928 response: 00:36:47.928 { 00:36:47.928 "code": -32602, 00:36:47.928 "message": "Invalid parameters" 00:36:47.928 } 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:47.928 19:09:59 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:47.928 19:09:59 keyring_file -- keyring/file.sh@46 -- # bperfpid=3718728 00:36:47.928 19:09:59 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:47.928 19:09:59 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3718728 /var/tmp/bperf.sock 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3718728 ']' 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:47.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:47.928 19:09:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:47.928 [2024-07-25 19:09:59.762175] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:36:47.928 [2024-07-25 19:09:59.762251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718728 ] 00:36:47.928 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.186 [2024-07-25 19:09:59.822461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.186 [2024-07-25 19:09:59.912726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.186 19:10:00 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:48.186 19:10:00 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:48.186 19:10:00 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:48.186 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:48.444 19:10:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dvWqOVa7lO 00:36:48.444 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dvWqOVa7lO 00:36:48.702 19:10:00 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:48.702 19:10:00 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:48.702 19:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.702 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.702 19:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.960 19:10:00 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ubiAtDpPgV == \/\t\m\p\/\t\m\p\.\u\b\i\A\t\D\p\P\g\V ]] 00:36:48.960 19:10:00 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:48.960 19:10:00 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:48.960 19:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.960 19:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.960 19:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:49.217 19:10:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dvWqOVa7lO == \/\t\m\p\/\t\m\p\.\d\v\W\q\O\V\a\7\l\O ]] 00:36:49.217 19:10:01 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:49.218 19:10:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.218 19:10:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.218 19:10:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.218 19:10:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.218 19:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.475 19:10:01 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:49.475 19:10:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:49.475 19:10:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:49.475 19:10:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.475 19:10:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.475 19:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.475 19:10:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:49.732 19:10:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:49.732 19:10:01 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.732 19:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.988 [2024-07-25 19:10:01.739275] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:49.988 nvme0n1 00:36:49.988 19:10:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:49.988 19:10:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.988 19:10:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.988 19:10:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.988 19:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.988 19:10:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.255 19:10:02 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:50.255 19:10:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:50.255 19:10:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:50.255 19:10:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.255 19:10:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.255 
19:10:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.255 19:10:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:50.511 19:10:02 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:50.511 19:10:02 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:50.769 Running I/O for 1 seconds... 00:36:51.706 00:36:51.706 Latency(us) 00:36:51.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.706 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:51.706 nvme0n1 : 1.01 7569.50 29.57 0.00 0.00 16822.76 4587.52 24758.04 00:36:51.706 =================================================================================================================== 00:36:51.706 Total : 7569.50 29.57 0.00 0.00 16822.76 4587.52 24758.04 00:36:51.706 0 00:36:51.706 19:10:03 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:51.706 19:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:51.963 19:10:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:51.963 19:10:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:51.963 19:10:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.963 19:10:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.963 19:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.963 19:10:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.220 19:10:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:52.220 19:10:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:52.220 19:10:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:52.220 19:10:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.220 19:10:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.220 19:10:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:52.220 19:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.477 19:10:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:52.477 19:10:04 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:52.477 19:10:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:52.477 19:10:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:52.477 19:10:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:52.477 19:10:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.477 19:10:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:52.478 19:10:04 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:52.478 19:10:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:52.478 19:10:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:52.736 [2024-07-25 19:10:04.445377] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:52.736 [2024-07-25 19:10:04.446302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02310 (107): Transport endpoint is not connected 00:36:52.736 [2024-07-25 19:10:04.447294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02310 (9): Bad file descriptor 00:36:52.736 [2024-07-25 19:10:04.448292] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:52.736 [2024-07-25 19:10:04.448313] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:52.736 [2024-07-25 19:10:04.448327] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:52.736 request: 00:36:52.736 { 00:36:52.736 "name": "nvme0", 00:36:52.736 "trtype": "tcp", 00:36:52.736 "traddr": "127.0.0.1", 00:36:52.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.736 "adrfam": "ipv4", 00:36:52.736 "trsvcid": "4420", 00:36:52.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.736 "psk": "key1", 00:36:52.736 "method": "bdev_nvme_attach_controller", 00:36:52.736 "req_id": 1 00:36:52.736 } 00:36:52.736 Got JSON-RPC error response 00:36:52.736 response: 00:36:52.736 { 00:36:52.736 "code": -5, 00:36:52.736 "message": "Input/output error" 00:36:52.736 } 00:36:52.736 19:10:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:52.736 19:10:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:52.736 19:10:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:52.736 19:10:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:52.736 19:10:04 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:52.736 19:10:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.736 19:10:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.736 19:10:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.736 19:10:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.736 19:10:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.993 19:10:04 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:52.993 19:10:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:52.993 19:10:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:52.993 19:10:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.993 19:10:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.993 19:10:04 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.993 19:10:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.249 19:10:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:53.249 19:10:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:53.249 19:10:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:53.506 19:10:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:53.506 19:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:53.764 19:10:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:53.764 19:10:05 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:53.764 19:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.021 19:10:05 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:54.021 19:10:05 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ubiAtDpPgV 00:36:54.021 19:10:05 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:54.021 19:10:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:54.021 19:10:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:54.021 19:10:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:54.021 19:10:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.021 19:10:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:54.021 19:10:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.021 19:10:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:54.022 19:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:54.299 [2024-07-25 19:10:05.973438] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ubiAtDpPgV': 0100660 00:36:54.299 [2024-07-25 19:10:05.973478] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:54.299 request: 00:36:54.299 { 00:36:54.299 "name": "key0", 00:36:54.299 "path": "/tmp/tmp.ubiAtDpPgV", 00:36:54.299 "method": "keyring_file_add_key", 00:36:54.299 "req_id": 1 00:36:54.299 } 00:36:54.299 Got JSON-RPC error response 00:36:54.299 response: 00:36:54.299 { 00:36:54.299 "code": -1, 00:36:54.299 "message": "Operation not permitted" 00:36:54.299 } 00:36:54.299 19:10:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:54.299 19:10:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:54.299 19:10:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:54.299 19:10:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:54.299 19:10:05 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ubiAtDpPgV 00:36:54.299 19:10:05 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:54.299 19:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ubiAtDpPgV 00:36:54.556 19:10:06 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ubiAtDpPgV 00:36:54.556 19:10:06 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:54.556 19:10:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:54.556 19:10:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.556 19:10:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.556 19:10:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.556 19:10:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:54.813 19:10:06 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:54.813 19:10:06 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:54.813 19:10:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:54.813 19:10:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:54.813 19:10:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:54.813 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.813 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:54.813 19:10:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.813 19:10:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:54.813 19:10:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.070 [2024-07-25 19:10:06.699367] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ubiAtDpPgV': No such file or directory 00:36:55.070 [2024-07-25 19:10:06.699416] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:55.070 [2024-07-25 19:10:06.699447] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:55.070 [2024-07-25 19:10:06.699460] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:55.071 [2024-07-25 19:10:06.699473] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:55.071 request: 00:36:55.071 { 00:36:55.071 "name": "nvme0", 00:36:55.071 "trtype": "tcp", 00:36:55.071 "traddr": "127.0.0.1", 00:36:55.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:55.071 "adrfam": "ipv4", 00:36:55.071 "trsvcid": "4420", 00:36:55.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:55.071 "psk": "key0", 00:36:55.071 "method": "bdev_nvme_attach_controller", 
00:36:55.071 "req_id": 1 00:36:55.071 } 00:36:55.071 Got JSON-RPC error response 00:36:55.071 response: 00:36:55.071 { 00:36:55.071 "code": -19, 00:36:55.071 "message": "No such device" 00:36:55.071 } 00:36:55.071 19:10:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:55.071 19:10:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.071 19:10:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.071 19:10:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.071 19:10:06 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:55.071 19:10:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:55.328 19:10:06 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:55.328 19:10:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:55.328 19:10:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:55.328 19:10:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:55.328 19:10:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:55.328 19:10:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:55.328 19:10:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ax7YCH4bMS 00:36:55.328 19:10:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:55.328 19:10:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:55.328 19:10:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:55.328 19:10:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:55.328 19:10:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:55.328 19:10:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:55.328 19:10:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:55.328 19:10:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ax7YCH4bMS 00:36:55.328 19:10:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ax7YCH4bMS 00:36:55.328 19:10:07 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ax7YCH4bMS 00:36:55.328 19:10:07 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ax7YCH4bMS 00:36:55.328 19:10:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ax7YCH4bMS 00:36:55.586 19:10:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.586 19:10:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:55.843 nvme0n1 00:36:55.843 19:10:07 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:55.843 19:10:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.843 19:10:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.843 19:10:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.843 19:10:07 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.843 19:10:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.100 19:10:07 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:56.100 19:10:07 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:56.100 19:10:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:56.358 19:10:08 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:56.358 19:10:08 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:56.358 19:10:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.358 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.358 19:10:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.616 19:10:08 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:56.616 19:10:08 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:56.616 19:10:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.616 19:10:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.616 19:10:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.616 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.616 19:10:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.874 19:10:08 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:56.874 19:10:08 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:56.874 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:57.132 19:10:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:57.132 19:10:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.132 19:10:08 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:57.389 19:10:09 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:57.389 19:10:09 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ax7YCH4bMS 00:36:57.389 19:10:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ax7YCH4bMS 00:36:57.647 19:10:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dvWqOVa7lO 00:36:57.647 19:10:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dvWqOVa7lO 00:36:57.904 19:10:09 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:57.904 19:10:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.163 nvme0n1 00:36:58.163 19:10:09 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:58.163 19:10:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:58.444 19:10:10 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:58.444 "subsystems": [ 00:36:58.444 { 00:36:58.444 "subsystem": "keyring", 00:36:58.444 "config": [ 00:36:58.444 { 00:36:58.444 "method": "keyring_file_add_key", 00:36:58.444 "params": { 00:36:58.444 "name": "key0", 00:36:58.444 "path": "/tmp/tmp.ax7YCH4bMS" 00:36:58.444 } 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "method": "keyring_file_add_key", 00:36:58.444 "params": { 00:36:58.444 "name": "key1", 00:36:58.444 "path": "/tmp/tmp.dvWqOVa7lO" 00:36:58.444 } 00:36:58.444 } 00:36:58.444 ] 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "subsystem": "iobuf", 00:36:58.444 "config": [ 00:36:58.444 { 00:36:58.444 "method": "iobuf_set_options", 00:36:58.444 "params": { 00:36:58.444 "small_pool_count": 8192, 00:36:58.444 "large_pool_count": 1024, 00:36:58.444 "small_bufsize": 8192, 00:36:58.444 "large_bufsize": 135168 00:36:58.444 } 00:36:58.444 } 00:36:58.444 ] 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "subsystem": "sock", 00:36:58.444 "config": [ 00:36:58.444 { 00:36:58.444 "method": "sock_set_default_impl", 00:36:58.444 "params": { 00:36:58.444 "impl_name": "posix" 00:36:58.444 } 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "method": "sock_impl_set_options", 00:36:58.444 "params": { 00:36:58.444 "impl_name": "ssl", 00:36:58.444 "recv_buf_size": 4096, 00:36:58.444 "send_buf_size": 4096, 00:36:58.444 "enable_recv_pipe": true, 00:36:58.444 "enable_quickack": false, 00:36:58.444 "enable_placement_id": 0, 00:36:58.444 "enable_zerocopy_send_server": true, 00:36:58.444 "enable_zerocopy_send_client": false, 00:36:58.444 "zerocopy_threshold": 0, 00:36:58.444 "tls_version": 0, 00:36:58.444 "enable_ktls": false 00:36:58.444 } 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "method": "sock_impl_set_options", 00:36:58.444 "params": { 00:36:58.444 "impl_name": "posix", 00:36:58.444 "recv_buf_size": 2097152, 00:36:58.444 "send_buf_size": 2097152, 00:36:58.444 "enable_recv_pipe": true, 00:36:58.444 "enable_quickack": false, 00:36:58.444 "enable_placement_id": 0, 00:36:58.444 "enable_zerocopy_send_server": true, 00:36:58.444 "enable_zerocopy_send_client": false, 00:36:58.444 "zerocopy_threshold": 0, 00:36:58.444 "tls_version": 0, 00:36:58.444 "enable_ktls": false 00:36:58.444 } 00:36:58.444 } 00:36:58.444 ] 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "subsystem": "vmd", 00:36:58.444 "config": [] 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "subsystem": "accel", 00:36:58.444 "config": [ 00:36:58.444 { 00:36:58.444 "method": "accel_set_options", 00:36:58.444 "params": { 00:36:58.444 "small_cache_size": 128, 00:36:58.444 "large_cache_size": 16, 00:36:58.444 "task_count": 2048, 00:36:58.444 "sequence_count": 2048, 00:36:58.444 "buf_count": 2048 00:36:58.444 } 00:36:58.444 } 00:36:58.444 ] 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "subsystem": "bdev", 00:36:58.444 "config": [ 00:36:58.444 { 00:36:58.444 "method": "bdev_set_options", 00:36:58.444 "params": { 00:36:58.444 "bdev_io_pool_size": 65535, 00:36:58.444 "bdev_io_cache_size": 256, 00:36:58.444 "bdev_auto_examine": true, 00:36:58.444 "iobuf_small_cache_size": 128, 
00:36:58.444 "iobuf_large_cache_size": 16 00:36:58.444 } 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "method": "bdev_raid_set_options", 00:36:58.444 "params": { 00:36:58.444 "process_window_size_kb": 1024 00:36:58.444 } 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "method": "bdev_iscsi_set_options", 00:36:58.444 "params": { 00:36:58.444 "timeout_sec": 30 00:36:58.444 } 00:36:58.444 }, 00:36:58.444 { 00:36:58.444 "method": "bdev_nvme_set_options", 00:36:58.444 "params": { 00:36:58.444 "action_on_timeout": "none", 00:36:58.444 "timeout_us": 0, 00:36:58.444 "timeout_admin_us": 0, 00:36:58.444 "keep_alive_timeout_ms": 10000, 00:36:58.444 "arbitration_burst": 0, 00:36:58.444 "low_priority_weight": 0, 00:36:58.444 "medium_priority_weight": 0, 00:36:58.444 "high_priority_weight": 0, 00:36:58.444 "nvme_adminq_poll_period_us": 10000, 00:36:58.444 "nvme_ioq_poll_period_us": 0, 00:36:58.444 "io_queue_requests": 512, 00:36:58.444 "delay_cmd_submit": true, 00:36:58.444 "transport_retry_count": 4, 00:36:58.444 "bdev_retry_count": 3, 00:36:58.444 "transport_ack_timeout": 0, 00:36:58.444 "ctrlr_loss_timeout_sec": 0, 00:36:58.444 "reconnect_delay_sec": 0, 00:36:58.444 "fast_io_fail_timeout_sec": 0, 00:36:58.444 "disable_auto_failback": false, 00:36:58.444 "generate_uuids": false, 00:36:58.444 "transport_tos": 0, 00:36:58.444 "nvme_error_stat": false, 00:36:58.445 "rdma_srq_size": 0, 00:36:58.445 "io_path_stat": false, 00:36:58.445 "allow_accel_sequence": false, 00:36:58.445 "rdma_max_cq_size": 0, 00:36:58.445 "rdma_cm_event_timeout_ms": 0, 00:36:58.445 "dhchap_digests": [ 00:36:58.445 "sha256", 00:36:58.445 "sha384", 00:36:58.445 "sha512" 00:36:58.445 ], 00:36:58.445 "dhchap_dhgroups": [ 00:36:58.445 "null", 00:36:58.445 "ffdhe2048", 00:36:58.445 "ffdhe3072", 00:36:58.445 "ffdhe4096", 00:36:58.445 "ffdhe6144", 00:36:58.445 "ffdhe8192" 00:36:58.445 ] 00:36:58.445 } 00:36:58.445 }, 00:36:58.445 { 00:36:58.445 "method": "bdev_nvme_attach_controller", 00:36:58.445 "params": { 00:36:58.445 "name": "nvme0", 00:36:58.445 "trtype": "TCP", 00:36:58.445 "adrfam": "IPv4", 00:36:58.445 "traddr": "127.0.0.1", 00:36:58.445 "trsvcid": "4420", 00:36:58.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.445 "prchk_reftag": false, 00:36:58.445 "prchk_guard": false, 00:36:58.445 "ctrlr_loss_timeout_sec": 0, 00:36:58.445 "reconnect_delay_sec": 0, 00:36:58.445 "fast_io_fail_timeout_sec": 0, 00:36:58.445 "psk": "key0", 00:36:58.445 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.445 "hdgst": false, 00:36:58.445 "ddgst": false 00:36:58.445 } 00:36:58.445 }, 00:36:58.445 { 00:36:58.445 "method": "bdev_nvme_set_hotplug", 00:36:58.445 "params": { 00:36:58.445 "period_us": 100000, 00:36:58.445 "enable": false 00:36:58.445 } 00:36:58.445 }, 00:36:58.445 { 00:36:58.445 "method": "bdev_wait_for_examine" 00:36:58.445 } 00:36:58.445 ] 00:36:58.445 }, 00:36:58.445 { 00:36:58.445 "subsystem": "nbd", 00:36:58.445 "config": [] 00:36:58.445 } 00:36:58.445 ] 00:36:58.445 }' 00:36:58.445 19:10:10 keyring_file -- keyring/file.sh@114 -- # killprocess 3718728 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3718728 ']' 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3718728 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3718728 00:36:58.445 19:10:10 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3718728' 00:36:58.445 killing process with pid 3718728 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@965 -- # kill 3718728 00:36:58.445 Received shutdown signal, test time was about 1.000000 seconds 00:36:58.445 00:36:58.445 Latency(us) 00:36:58.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.445 =================================================================================================================== 00:36:58.445 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:58.445 19:10:10 keyring_file -- common/autotest_common.sh@970 -- # wait 3718728 00:36:58.703 19:10:10 keyring_file -- keyring/file.sh@117 -- # bperfpid=3720179 00:36:58.703 19:10:10 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3720179 /var/tmp/bperf.sock 00:36:58.703 19:10:10 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3720179 ']' 00:36:58.704 19:10:10 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:58.704 19:10:10 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:58.704 19:10:10 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:58.704 19:10:10 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:58.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
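Condensing the keyring_file steps exercised above: the PSK goes into a file in interchange form, the file must be mode 0600 (the 0660 attempt is rejected with "Operation not permitted"), the file is registered under a name with keyring_file_add_key, and that name is what bdev_nvme_attach_controller consumes via --psk. A minimal sketch against the bperf RPC socket, reusing the interchange string this run derives from the 00112233445566778899aabbccddeeff test key; $SPDK_DIR standing in for the checkout path is my shorthand, not something from the log.

# Sketch of the file-based keyring flow shown above (initiator side only).
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # $SPDK_DIR: assumed shorthand
keyfile=$(mktemp)

# Interchange-format PSK for the test key (digest 0), matching what
# format_interchange_psk produces for it in this run.
printf 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$keyfile"
chmod 0600 "$keyfile"     # looser modes (e.g. 0660) are refused by keyring_file

$rpc keyring_file_add_key key0 "$keyfile"
$rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0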
00:36:58.704 19:10:10 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:58.704 19:10:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:58.704 19:10:10 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:58.704 "subsystems": [ 00:36:58.704 { 00:36:58.704 "subsystem": "keyring", 00:36:58.704 "config": [ 00:36:58.704 { 00:36:58.704 "method": "keyring_file_add_key", 00:36:58.704 "params": { 00:36:58.704 "name": "key0", 00:36:58.704 "path": "/tmp/tmp.ax7YCH4bMS" 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "keyring_file_add_key", 00:36:58.704 "params": { 00:36:58.704 "name": "key1", 00:36:58.704 "path": "/tmp/tmp.dvWqOVa7lO" 00:36:58.704 } 00:36:58.704 } 00:36:58.704 ] 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "subsystem": "iobuf", 00:36:58.704 "config": [ 00:36:58.704 { 00:36:58.704 "method": "iobuf_set_options", 00:36:58.704 "params": { 00:36:58.704 "small_pool_count": 8192, 00:36:58.704 "large_pool_count": 1024, 00:36:58.704 "small_bufsize": 8192, 00:36:58.704 "large_bufsize": 135168 00:36:58.704 } 00:36:58.704 } 00:36:58.704 ] 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "subsystem": "sock", 00:36:58.704 "config": [ 00:36:58.704 { 00:36:58.704 "method": "sock_set_default_impl", 00:36:58.704 "params": { 00:36:58.704 "impl_name": "posix" 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "sock_impl_set_options", 00:36:58.704 "params": { 00:36:58.704 "impl_name": "ssl", 00:36:58.704 "recv_buf_size": 4096, 00:36:58.704 "send_buf_size": 4096, 00:36:58.704 "enable_recv_pipe": true, 00:36:58.704 "enable_quickack": false, 00:36:58.704 "enable_placement_id": 0, 00:36:58.704 "enable_zerocopy_send_server": true, 00:36:58.704 "enable_zerocopy_send_client": false, 00:36:58.704 "zerocopy_threshold": 0, 00:36:58.704 "tls_version": 0, 00:36:58.704 "enable_ktls": false 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "sock_impl_set_options", 00:36:58.704 "params": { 00:36:58.704 "impl_name": "posix", 00:36:58.704 "recv_buf_size": 2097152, 00:36:58.704 "send_buf_size": 2097152, 00:36:58.704 "enable_recv_pipe": true, 00:36:58.704 "enable_quickack": false, 00:36:58.704 "enable_placement_id": 0, 00:36:58.704 "enable_zerocopy_send_server": true, 00:36:58.704 "enable_zerocopy_send_client": false, 00:36:58.704 "zerocopy_threshold": 0, 00:36:58.704 "tls_version": 0, 00:36:58.704 "enable_ktls": false 00:36:58.704 } 00:36:58.704 } 00:36:58.704 ] 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "subsystem": "vmd", 00:36:58.704 "config": [] 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "subsystem": "accel", 00:36:58.704 "config": [ 00:36:58.704 { 00:36:58.704 "method": "accel_set_options", 00:36:58.704 "params": { 00:36:58.704 "small_cache_size": 128, 00:36:58.704 "large_cache_size": 16, 00:36:58.704 "task_count": 2048, 00:36:58.704 "sequence_count": 2048, 00:36:58.704 "buf_count": 2048 00:36:58.704 } 00:36:58.704 } 00:36:58.704 ] 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "subsystem": "bdev", 00:36:58.704 "config": [ 00:36:58.704 { 00:36:58.704 "method": "bdev_set_options", 00:36:58.704 "params": { 00:36:58.704 "bdev_io_pool_size": 65535, 00:36:58.704 "bdev_io_cache_size": 256, 00:36:58.704 "bdev_auto_examine": true, 00:36:58.704 "iobuf_small_cache_size": 128, 00:36:58.704 "iobuf_large_cache_size": 16 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "bdev_raid_set_options", 00:36:58.704 "params": { 00:36:58.704 "process_window_size_kb": 1024 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 
"method": "bdev_iscsi_set_options", 00:36:58.704 "params": { 00:36:58.704 "timeout_sec": 30 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "bdev_nvme_set_options", 00:36:58.704 "params": { 00:36:58.704 "action_on_timeout": "none", 00:36:58.704 "timeout_us": 0, 00:36:58.704 "timeout_admin_us": 0, 00:36:58.704 "keep_alive_timeout_ms": 10000, 00:36:58.704 "arbitration_burst": 0, 00:36:58.704 "low_priority_weight": 0, 00:36:58.704 "medium_priority_weight": 0, 00:36:58.704 "high_priority_weight": 0, 00:36:58.704 "nvme_adminq_poll_period_us": 10000, 00:36:58.704 "nvme_ioq_poll_period_us": 0, 00:36:58.704 "io_queue_requests": 512, 00:36:58.704 "delay_cmd_submit": true, 00:36:58.704 "transport_retry_count": 4, 00:36:58.704 "bdev_retry_count": 3, 00:36:58.704 "transport_ack_timeout": 0, 00:36:58.704 "ctrlr_loss_timeout_sec": 0, 00:36:58.704 "reconnect_delay_sec": 0, 00:36:58.704 "fast_io_fail_timeout_sec": 0, 00:36:58.704 "disable_auto_failback": false, 00:36:58.704 "generate_uuids": false, 00:36:58.704 "transport_tos": 0, 00:36:58.704 "nvme_error_stat": false, 00:36:58.704 "rdma_srq_size": 0, 00:36:58.704 "io_path_stat": false, 00:36:58.704 "allow_accel_sequence": false, 00:36:58.704 "rdma_max_cq_size": 0, 00:36:58.704 "rdma_cm_event_timeout_ms": 0, 00:36:58.704 "dhchap_digests": [ 00:36:58.704 "sha256", 00:36:58.704 "sha384", 00:36:58.704 "sha512" 00:36:58.704 ], 00:36:58.704 "dhchap_dhgroups": [ 00:36:58.704 "null", 00:36:58.704 "ffdhe2048", 00:36:58.704 "ffdhe3072", 00:36:58.704 "ffdhe4096", 00:36:58.704 "ffdhe6144", 00:36:58.704 "ffdhe8192" 00:36:58.704 ] 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "bdev_nvme_attach_controller", 00:36:58.704 "params": { 00:36:58.704 "name": "nvme0", 00:36:58.704 "trtype": "TCP", 00:36:58.704 "adrfam": "IPv4", 00:36:58.704 "traddr": "127.0.0.1", 00:36:58.704 "trsvcid": "4420", 00:36:58.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.704 "prchk_reftag": false, 00:36:58.704 "prchk_guard": false, 00:36:58.704 "ctrlr_loss_timeout_sec": 0, 00:36:58.704 "reconnect_delay_sec": 0, 00:36:58.704 "fast_io_fail_timeout_sec": 0, 00:36:58.704 "psk": "key0", 00:36:58.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.704 "hdgst": false, 00:36:58.704 "ddgst": false 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "bdev_nvme_set_hotplug", 00:36:58.704 "params": { 00:36:58.704 "period_us": 100000, 00:36:58.704 "enable": false 00:36:58.704 } 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "method": "bdev_wait_for_examine" 00:36:58.704 } 00:36:58.704 ] 00:36:58.704 }, 00:36:58.704 { 00:36:58.704 "subsystem": "nbd", 00:36:58.704 "config": [] 00:36:58.704 } 00:36:58.704 ] 00:36:58.704 }' 00:36:58.704 [2024-07-25 19:10:10.464296] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:36:58.704 [2024-07-25 19:10:10.464371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720179 ] 00:36:58.704 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.704 [2024-07-25 19:10:10.522035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.963 [2024-07-25 19:10:10.610898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.963 [2024-07-25 19:10:10.792012] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:59.897 19:10:11 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:59.897 19:10:11 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:59.897 19:10:11 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:59.897 19:10:11 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:59.897 19:10:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.897 19:10:11 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:59.897 19:10:11 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:59.897 19:10:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:59.897 19:10:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.897 19:10:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.897 19:10:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.897 19:10:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.155 19:10:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:00.155 19:10:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:00.155 19:10:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:00.155 19:10:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.155 19:10:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.155 19:10:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.155 19:10:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:00.413 19:10:12 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:00.413 19:10:12 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:00.413 19:10:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:00.413 19:10:12 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:00.671 19:10:12 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:00.671 19:10:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:00.671 19:10:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ax7YCH4bMS /tmp/tmp.dvWqOVa7lO 00:37:00.671 19:10:12 keyring_file -- keyring/file.sh@20 -- # killprocess 3720179 00:37:00.671 19:10:12 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3720179 ']' 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3720179 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720179 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720179' 00:37:00.672 killing process with pid 3720179 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@965 -- # kill 3720179 00:37:00.672 Received shutdown signal, test time was about 1.000000 seconds 00:37:00.672 00:37:00.672 Latency(us) 00:37:00.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.672 =================================================================================================================== 00:37:00.672 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:00.672 19:10:12 keyring_file -- common/autotest_common.sh@970 -- # wait 3720179 00:37:00.929 19:10:12 keyring_file -- keyring/file.sh@21 -- # killprocess 3718718 00:37:00.929 19:10:12 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3718718 ']' 00:37:00.929 19:10:12 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3718718 00:37:00.929 19:10:12 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:00.929 19:10:12 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:00.929 19:10:12 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3718718 00:37:00.930 19:10:12 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:00.930 19:10:12 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:00.930 19:10:12 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3718718' 00:37:00.930 killing process with pid 3718718 00:37:00.930 19:10:12 keyring_file -- common/autotest_common.sh@965 -- # kill 3718718 00:37:00.930 [2024-07-25 19:10:12.663429] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:00.930 19:10:12 keyring_file -- common/autotest_common.sh@970 -- # wait 3718718 00:37:01.188 00:37:01.188 real 0m13.956s 00:37:01.188 user 0m35.004s 00:37:01.188 sys 0m3.217s 00:37:01.188 19:10:13 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:01.188 19:10:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:01.188 ************************************ 00:37:01.188 END TEST keyring_file 00:37:01.188 ************************************ 00:37:01.447 19:10:13 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:01.447 19:10:13 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:01.447 19:10:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:01.447 19:10:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:01.447 19:10:13 -- common/autotest_common.sh@10 -- # set +x 00:37:01.447 ************************************ 00:37:01.447 START TEST keyring_linux 00:37:01.447 ************************************ 00:37:01.447 19:10:13 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:01.447 * Looking for test storage... 
00:37:01.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.447 19:10:13 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.447 19:10:13 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.447 19:10:13 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.447 19:10:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.447 19:10:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.447 19:10:13 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.447 19:10:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:01.447 19:10:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:01.447 19:10:13 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:01.447 /tmp/:spdk-test:key0 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:01.447 19:10:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:01.447 19:10:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:01.447 /tmp/:spdk-test:key1 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3720541 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:01.447 19:10:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3720541 00:37:01.447 19:10:13 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3720541 ']' 00:37:01.448 19:10:13 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.448 19:10:13 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:01.448 19:10:13 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.448 19:10:13 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:01.448 19:10:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:01.448 [2024-07-25 19:10:13.269294] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
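The prep_key calls above lean on format_interchange_psk from the test's nvmf/common.sh to turn a raw hex string into the NVMeTLSkey-1 interchange form before it is written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. Judging from the strings this run produces, the payload is base64 of the key bytes with a 4-byte CRC-32 appended; the following is only a rough stand-alone approximation of that helper, and the CRC byte order in it is an assumption, so compare against the real function before reusing it.

# Rough stand-in for format_interchange_psk; NOT the SPDK helper itself.
# For the key used above it should yield NVMeTLSkey-1:00:MDAxMTIy...JEiQ:
format_psk_sketch() {
    local key=$1                                  # ASCII key material, e.g. 00112233...eeff
    python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")       # byte order assumed, not verified
# "00" mirrors digest=0 as used in this run (key stored as configured)
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode(), end="")
PY
}

format_psk_sketch 00112233445566778899aabbccddeeff > /tmp/:spdk-test:key0
format_psk_sketch 112233445566778899aabbccddeeff00 > /tmp/:spdk-test:key1
chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1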
00:37:01.448 [2024-07-25 19:10:13.269380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720541 ] 00:37:01.448 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.705 [2024-07-25 19:10:13.338151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.705 [2024-07-25 19:10:13.429006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:01.963 19:10:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:01.963 [2024-07-25 19:10:13.685739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.963 null0 00:37:01.963 [2024-07-25 19:10:13.717793] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:01.963 [2024-07-25 19:10:13.718301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:01.963 19:10:13 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:01.963 311910075 00:37:01.963 19:10:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:01.963 423745123 00:37:01.963 19:10:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3720671 00:37:01.963 19:10:13 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:01.963 19:10:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3720671 /var/tmp/bperf.sock 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3720671 ']' 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:01.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:01.963 19:10:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:01.963 [2024-07-25 19:10:13.782284] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:37:01.963 [2024-07-25 19:10:13.782359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720671 ] 00:37:01.963 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.222 [2024-07-25 19:10:13.841917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.222 [2024-07-25 19:10:13.932197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.222 19:10:13 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:02.222 19:10:13 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:02.222 19:10:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:02.222 19:10:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:02.480 19:10:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:02.480 19:10:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:02.738 19:10:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:02.738 19:10:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:02.996 [2024-07-25 19:10:14.772456] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:02.996 nvme0n1 00:37:02.996 19:10:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:02.996 19:10:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:02.996 19:10:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:02.996 19:10:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:02.996 19:10:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.996 19:10:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:03.254 19:10:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:03.254 19:10:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:03.254 19:10:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:03.254 19:10:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:03.254 19:10:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.254 19:10:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.254 19:10:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:03.512 19:10:15 keyring_linux -- keyring/linux.sh@25 -- # sn=311910075 00:37:03.512 19:10:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:03.512 19:10:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
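The keyring_linux flow above swaps the key files for the kernel session keyring: the same interchange strings are loaded with keyctl, bdevperf is started with -z --wait-for-rpc so the Linux keyring backend can be enabled before framework init, and the controller is attached by key description rather than by a registered file name. A condensed sketch of those steps (interchange strings copied from this run; $SPDK_DIR is again my shorthand for the checkout path):

# Sketch: kernel-keyring flow as exercised above (initiator side only).
keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
keyctl add user :spdk-test:key1 \
    "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s

rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # $SPDK_DIR: assumed shorthand

# bdevperf was launched with -z --wait-for-rpc, so the keyring_linux module
# has to be switched on before framework_start_init.
$rpc keyring_linux_set_options --enable
$rpc framework_start_init

# The key is referenced by its keyring description, not by a file path.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0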
00:37:03.512 19:10:15 keyring_linux -- keyring/linux.sh@26 -- # [[ 311910075 == \3\1\1\9\1\0\0\7\5 ]] 00:37:03.512 19:10:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 311910075 00:37:03.512 19:10:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:03.512 19:10:15 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:03.770 Running I/O for 1 seconds... 00:37:04.703 00:37:04.703 Latency(us) 00:37:04.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.704 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:04.704 nvme0n1 : 1.01 7538.43 29.45 0.00 0.00 16845.02 6941.96 25049.32 00:37:04.704 =================================================================================================================== 00:37:04.704 Total : 7538.43 29.45 0.00 0.00 16845.02 6941.96 25049.32 00:37:04.704 0 00:37:04.704 19:10:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:04.704 19:10:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:04.962 19:10:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:04.962 19:10:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:04.962 19:10:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:04.962 19:10:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:04.962 19:10:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.962 19:10:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:05.220 19:10:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:05.220 19:10:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:05.220 19:10:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:05.220 19:10:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.220 19:10:17 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:05.220 19:10:17 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.220 19:10:17 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:05.220 19:10:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.220 19:10:17 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:05.220 19:10:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:05.220 19:10:17 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.220 19:10:17 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:05.478 [2024-07-25 19:10:17.254367] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:05.478 [2024-07-25 19:10:17.254378] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:05.478 [2024-07-25 19:10:17.255336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f18270 (9): Bad file descriptor 00:37:05.478 [2024-07-25 19:10:17.256349] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:05.478 [2024-07-25 19:10:17.256368] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:05.478 [2024-07-25 19:10:17.256382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:05.478 request: 00:37:05.478 { 00:37:05.478 "name": "nvme0", 00:37:05.479 "trtype": "tcp", 00:37:05.479 "traddr": "127.0.0.1", 00:37:05.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.479 "adrfam": "ipv4", 00:37:05.479 "trsvcid": "4420", 00:37:05.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.479 "psk": ":spdk-test:key1", 00:37:05.479 "method": "bdev_nvme_attach_controller", 00:37:05.479 "req_id": 1 00:37:05.479 } 00:37:05.479 Got JSON-RPC error response 00:37:05.479 response: 00:37:05.479 { 00:37:05.479 "code": -5, 00:37:05.479 "message": "Input/output error" 00:37:05.479 } 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@33 -- # sn=311910075 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 311910075 00:37:05.479 1 links removed 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@33 -- # sn=423745123 00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 423745123 00:37:05.479 1 links removed 
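Verification and teardown also go through keyctl: the .sn reported by keyring_get_keys is matched against a session-keyring search, keyctl print is compared to the original interchange string, and each key is unlinked by serial, which is what produced the two "1 links removed" lines above. Roughly, for both descriptions used in this run:

# Sketch: verify and then remove the session-keyring entries from this run.
for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name")   # serial number, e.g. 311910075
    keyctl print "$sn"                    # should echo the NVMeTLSkey-1:... string
    keyctl unlink "$sn"                   # prints "1 links removed" on success
done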
00:37:05.479 19:10:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3720671 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3720671 ']' 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3720671 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720671 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720671' 00:37:05.479 killing process with pid 3720671 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@965 -- # kill 3720671 00:37:05.479 Received shutdown signal, test time was about 1.000000 seconds 00:37:05.479 00:37:05.479 Latency(us) 00:37:05.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.479 =================================================================================================================== 00:37:05.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:05.479 19:10:17 keyring_linux -- common/autotest_common.sh@970 -- # wait 3720671 00:37:05.737 19:10:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3720541 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3720541 ']' 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3720541 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3720541 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3720541' 00:37:05.737 killing process with pid 3720541 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@965 -- # kill 3720541 00:37:05.737 19:10:17 keyring_linux -- common/autotest_common.sh@970 -- # wait 3720541 00:37:06.304 00:37:06.304 real 0m4.903s 00:37:06.304 user 0m9.311s 00:37:06.304 sys 0m1.628s 00:37:06.304 19:10:17 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:06.304 19:10:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:06.304 ************************************ 00:37:06.304 END TEST keyring_linux 00:37:06.304 ************************************ 00:37:06.304 19:10:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
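[Editor's note] The killprocess calls above follow a simple kill-and-wait pattern. A condensed sketch of that pattern (simplified; the real helper in autotest_common.sh also inspects the process name via ps and special-cases sudo-owned targets):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1        # refuse an empty pid
      kill -0 "$pid" || return 1       # is the process still alive?
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true              # reap it; works because the target was started by this shell
  }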
00:37:06.304 19:10:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:06.304 19:10:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:06.304 19:10:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:06.304 19:10:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:06.304 19:10:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:06.304 19:10:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:06.304 19:10:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:06.304 19:10:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:06.305 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:37:06.305 19:10:18 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:06.305 19:10:18 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:06.305 19:10:18 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:06.305 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:37:08.206 INFO: APP EXITING 00:37:08.206 INFO: killing all VMs 00:37:08.206 INFO: killing vhost app 00:37:08.206 INFO: EXIT DONE 00:37:09.143 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:09.143 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:09.143 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:09.143 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:09.143 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:09.143 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:09.143 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:09.143 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:09.143 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:09.143 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:09.143 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:09.143 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:09.143 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:09.143 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:09.401 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:09.401 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:09.401 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:10.335 Cleaning 00:37:10.335 Removing: /var/run/dpdk/spdk0/config 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:10.335 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:10.335 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:10.593 Removing: /var/run/dpdk/spdk1/config 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:10.593 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:10.593 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:10.593 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:10.593 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:10.593 Removing: /var/run/dpdk/spdk2/config 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:10.593 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:10.593 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:10.593 Removing: /var/run/dpdk/spdk3/config 00:37:10.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:10.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:10.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:10.593 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:10.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:10.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:10.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:10.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:10.594 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:10.594 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:10.594 Removing: /var/run/dpdk/spdk4/config 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:10.594 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:10.594 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:10.594 Removing: /dev/shm/bdev_svc_trace.1 00:37:10.594 Removing: /dev/shm/nvmf_trace.0 00:37:10.594 Removing: /dev/shm/spdk_tgt_trace.pid3400473 00:37:10.594 Removing: /var/run/dpdk/spdk0 00:37:10.594 Removing: /var/run/dpdk/spdk1 00:37:10.594 Removing: /var/run/dpdk/spdk2 00:37:10.594 Removing: /var/run/dpdk/spdk3 00:37:10.594 Removing: /var/run/dpdk/spdk4 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3398927 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3399657 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3400473 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3400910 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3401596 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3401736 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3402450 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3402462 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3402704 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3404010 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3405419 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3405647 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3405918 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3406118 00:37:10.594 Removing: 
/var/run/dpdk/spdk_pid3406310 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3406466 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3406626 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3406808 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3407395 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3409742 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3409910 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3410088 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3410190 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3410506 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3410630 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3410942 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3411066 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3411233 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3411251 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3411458 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3411543 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3411909 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3412064 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3412272 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3412431 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3412567 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3412644 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3412883 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3413068 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3413226 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3413382 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3413656 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3413811 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3413971 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3414127 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3414401 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3414556 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3414712 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3414938 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3415140 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3415301 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3415459 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3415732 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3415888 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3416050 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3416229 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3416481 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3416555 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3416761 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3418931 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3472616 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3475106 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3482065 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3485231 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3487593 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3488070 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3495487 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3495544 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3496385 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3497038 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3497689 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3498095 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3498102 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3498250 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3498373 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3498383 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3499038 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3499685 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3500229 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3500630 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3500751 00:37:10.594 Removing: 
/var/run/dpdk/spdk_pid3500895 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3501771 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3502492 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3507834 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3508111 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3510614 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3514305 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3516352 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3522601 00:37:10.594 Removing: /var/run/dpdk/spdk_pid3528405 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3529598 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3530253 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3540313 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3542406 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3567705 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3570496 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3571668 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3572977 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3573110 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3573138 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3573268 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3573698 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3574897 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3575614 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3576040 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3577661 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3577971 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3578529 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3580920 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3584786 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3588334 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3611335 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3614543 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3618360 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3619309 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3620268 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3622814 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3625162 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3629250 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3629276 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3632137 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3632270 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3632407 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3632794 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3632800 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3633874 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3635055 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3636234 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3637424 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3638606 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3639780 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3643851 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3644537 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3645928 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3646665 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3650253 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3652230 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3655630 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3658963 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3665181 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3669626 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3669628 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3682426 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3682833 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3683310 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3683768 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3684228 00:37:10.852 Removing: 
/var/run/dpdk/spdk_pid3684728 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3685161 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3685567 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3688056 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3688203 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3691998 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3692168 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3693780 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3698677 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3698686 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3701571 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3702918 00:37:10.852 Removing: /var/run/dpdk/spdk_pid3704363 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3705103 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3706512 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3707497 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3713277 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3713660 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3714051 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3715489 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3715879 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3716275 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3718718 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3718728 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3720179 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3720541 00:37:10.853 Removing: /var/run/dpdk/spdk_pid3720671 00:37:10.853 Clean 00:37:11.111 19:10:22 -- common/autotest_common.sh@1447 -- # return 0 00:37:11.111 19:10:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:11.111 19:10:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.111 19:10:22 -- common/autotest_common.sh@10 -- # set +x 00:37:11.111 19:10:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:11.111 19:10:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:11.111 19:10:22 -- common/autotest_common.sh@10 -- # set +x 00:37:11.111 19:10:22 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:11.111 19:10:22 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:11.111 19:10:22 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:11.111 19:10:22 -- spdk/autotest.sh@391 -- # hash lcov 00:37:11.111 19:10:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:11.111 19:10:22 -- spdk/autotest.sh@393 -- # hostname 00:37:11.111 19:10:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:11.369 geninfo: WARNING: invalid characters removed from testname! 
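[Editor's note] The lcov invocations immediately above and below capture per-test counters, merge them with the baseline capture, and then strip out-of-tree and system sources from the combined report. A minimal sketch of that capture/merge/filter flow (paths and the -t label are illustrative; the branch/function-coverage --rc options used by the harness are omitted):

  SRC=/path/to/spdk
  OUT=/path/to/output
  lcov -q -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"                      # capture test counters
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge with the baseline
  lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"               # drop bundled DPDK
  lcov -q -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"               # drop system headers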
00:37:50.098 19:10:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:50.098 19:11:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:52.627 19:11:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:55.910 19:11:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:58.437 19:11:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:01.716 19:11:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:04.243 19:11:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:04.243 19:11:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:04.243 19:11:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:04.243 19:11:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:04.243 19:11:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:04.243 19:11:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.243 19:11:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.243 19:11:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.243 19:11:15 -- paths/export.sh@5 -- $ export PATH 00:38:04.243 19:11:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.243 19:11:15 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:04.243 19:11:15 -- common/autobuild_common.sh@440 -- $ date +%s 00:38:04.243 19:11:15 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721927475.XXXXXX 00:38:04.243 19:11:15 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721927475.Kqr6eL 00:38:04.243 19:11:15 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:38:04.243 19:11:15 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:38:04.243 19:11:15 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:04.243 19:11:15 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:04.243 19:11:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:04.243 19:11:15 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:04.243 19:11:15 -- common/autobuild_common.sh@456 -- $ get_config_params 00:38:04.243 19:11:15 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:04.243 19:11:15 -- common/autotest_common.sh@10 -- $ set +x 00:38:04.243 19:11:15 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:04.244 19:11:15 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:38:04.244 19:11:15 -- pm/common@17 -- $ local monitor 00:38:04.244 19:11:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:04.244 19:11:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:04.244 19:11:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:04.244 
19:11:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:04.244 19:11:15 -- pm/common@21 -- $ date +%s 00:38:04.244 19:11:15 -- pm/common@21 -- $ date +%s 00:38:04.244 19:11:15 -- pm/common@25 -- $ sleep 1 00:38:04.244 19:11:15 -- pm/common@21 -- $ date +%s 00:38:04.244 19:11:15 -- pm/common@21 -- $ date +%s 00:38:04.244 19:11:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721927475 00:38:04.244 19:11:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721927475 00:38:04.244 19:11:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721927475 00:38:04.244 19:11:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721927475 00:38:04.244 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721927475_collect-vmstat.pm.log 00:38:04.244 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721927475_collect-cpu-temp.pm.log 00:38:04.244 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721927475_collect-cpu-load.pm.log 00:38:04.244 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721927475_collect-bmc-pm.bmc.pm.log 00:38:05.181 19:11:16 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:38:05.181 19:11:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:05.181 19:11:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:05.181 19:11:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:05.181 19:11:16 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:05.181 19:11:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:05.181 19:11:16 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:05.181 19:11:16 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:05.181 19:11:16 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:05.181 19:11:16 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:05.182 19:11:16 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:05.182 19:11:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:05.182 19:11:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:05.182 19:11:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:05.182 19:11:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:05.182 19:11:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:05.182 19:11:16 -- pm/common@44 -- $ pid=3731769 00:38:05.182 19:11:16 -- pm/common@50 -- $ kill -TERM 3731769 00:38:05.182 19:11:16 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:05.182 19:11:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:05.182 19:11:16 -- pm/common@44 -- $ pid=3731771 00:38:05.182 19:11:16 -- pm/common@50 -- $ kill -TERM 3731771 00:38:05.182 19:11:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:05.182 19:11:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:05.182 19:11:16 -- pm/common@44 -- $ pid=3731773 00:38:05.182 19:11:16 -- pm/common@50 -- $ kill -TERM 3731773 00:38:05.182 19:11:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:05.182 19:11:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:05.182 19:11:16 -- pm/common@44 -- $ pid=3731796 00:38:05.182 19:11:16 -- pm/common@50 -- $ sudo -E kill -TERM 3731796 00:38:05.182 + [[ -n 3295680 ]] 00:38:05.182 + sudo kill 3295680 00:38:05.193 [Pipeline] } 00:38:05.211 [Pipeline] // stage 00:38:05.217 [Pipeline] } 00:38:05.235 [Pipeline] // timeout 00:38:05.241 [Pipeline] } 00:38:05.257 [Pipeline] // catchError 00:38:05.263 [Pipeline] } 00:38:05.281 [Pipeline] // wrap 00:38:05.288 [Pipeline] } 00:38:05.304 [Pipeline] // catchError 00:38:05.314 [Pipeline] stage 00:38:05.316 [Pipeline] { (Epilogue) 00:38:05.331 [Pipeline] catchError 00:38:05.332 [Pipeline] { 00:38:05.348 [Pipeline] echo 00:38:05.350 Cleanup processes 00:38:05.356 [Pipeline] sh 00:38:05.645 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:05.645 3731898 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:05.645 3732035 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:05.660 [Pipeline] sh 00:38:05.945 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:05.946 ++ grep -v 'sudo pgrep' 00:38:05.946 ++ awk '{print $1}' 00:38:05.946 + sudo kill -9 3731898 00:38:05.958 [Pipeline] sh 00:38:06.241 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:16.254 [Pipeline] sh 00:38:16.542 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:16.542 Artifacts sizes are good 00:38:16.560 [Pipeline] archiveArtifacts 00:38:16.568 Archiving artifacts 00:38:16.789 [Pipeline] sh 00:38:17.074 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:17.093 [Pipeline] cleanWs 00:38:17.103 [WS-CLEANUP] Deleting project workspace... 00:38:17.103 [WS-CLEANUP] Deferred wipeout is used... 00:38:17.111 [WS-CLEANUP] done 00:38:17.114 [Pipeline] } 00:38:17.136 [Pipeline] // catchError 00:38:17.150 [Pipeline] sh 00:38:17.431 + logger -p user.info -t JENKINS-CI 00:38:17.440 [Pipeline] } 00:38:17.457 [Pipeline] // stage 00:38:17.463 [Pipeline] } 00:38:17.480 [Pipeline] // node 00:38:17.487 [Pipeline] End of Pipeline 00:38:17.529 Finished: SUCCESS